The hex code F628 (the A1 and A2 framing bytes) is transmitted in every frame of every STS-1.
This allows a receiver to locate the alignment of the 125 µs frame within the received serial bit stream. Initially, the receiver scans the serial stream for the code F628. Once it is detected, the receiver watches to verify that the pattern repeats after exactly 810 STS-1 bytes, after which it can declare the frame found. Once the frame alignment is found, the remaining signal can be descrambled and the various overhead bytes extracted and processed.
Just prior to transmission, the entire SONET signal, with the exception of the framing bytes and the section trace byte, is scrambled. Scrambling randomizes the bit stream in order to provide sufficient 0-1 and 1-0 transitions for the receiver to derive a clock with which to receive the digital information.
Scrambling is performed by XORing the data signal with a pseudo-random bit sequence generated by the scrambler polynomial 1 + x^6 + x^7. The scrambler is frame synchronous, which means that it starts every frame in the same state.
Descrambling is performed by the receiver by XORing the received signal with the same pseudo random bit sequence. Note that since the scrambler is frame synchronous, the receiver must have found the frame alignment before the signal can be descrambled. That is why the frame bytes are not scrambled.
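To make the mechanism concrete, here is a minimal Python sketch of a frame-synchronous scrambler built on the standard SONET/SDH generator polynomial 1 + x^6 + x^7, with the register reset to all ones at each frame. The two-byte skip standing in for the unscrambled framing bytes is a simplification; a real STS-N leaves all of its A1/A2 framing bytes and the section trace byte unscrambled:

```python
def scrambler_sequence(num_bits):
    """PRBS from generator polynomial 1 + x^6 + x^7 (period 127 bits).

    The 7-bit register is reset to all ones at every frame boundary,
    which is what makes the scrambler frame synchronous.
    """
    state = 0b1111111
    seq = []
    for _ in range(num_bits):
        out = (state >> 6) & 1               # the MSB is the output bit
        fb = out ^ ((state >> 5) & 1)        # feedback taps at x^7 and x^6
        state = ((state << 1) | fb) & 0x7F
        seq.append(out)
    return seq

def scramble(frame, skip=2):
    """XOR a frame with the PRBS, leaving the first `skip` bytes untouched.

    Because XOR is its own inverse, descrambling at the receiver is the
    identical call: scramble(scramble(f)) == f.
    """
    bits = scrambler_sequence(8 * (len(frame) - skip))
    out = bytearray(frame[:skip])
    for i, byte in enumerate(frame[skip:]):
        b = 0
        for j in range(8):
            b = (b << 1) | (((byte >> (7 - j)) & 1) ^ bits[8 * i + j])
        out.append(b)
    return bytes(out)
```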
Another interesting answer, from ITU-T standardisation expert Huub van Helvoort:
The initial "F" provides four consecutive "1" bits for the clock recovery circuit to lock the clock. The number of "0"s and "1"s in F628 is equal (8 each) to ensure that there is DC balance in the signal (after line coding).
It has nothing to do with the scrambling; the pattern may even occur in the scrambled signal. However, it is very unlikely that this occurs exactly every 125 µs, so there will not be a false lock to this pattern.
An explanation of Huub's DC-balancing point, from Next Generation Transport Networks: Data, Management, and Control Planes by Manohar Naidu Ellanti:
A factor that was not initially anticipated when the A1 and A2 bit patterns were chosen was the potential effect on the laser for higher rate signals. For an STS-N signal, the frame begins with N adjacent A1 bytes followed by N adjacent A2 bytes. Note that A1 has more 1s than 0s, which means that the laser is on for a higher percentage of the time during the A1 bytes than during the A2 bytes, in which there are more 0s than 1s. As a result, if the laser is directly modulated, for large values of N the lack of balance between 0s and 1s causes the transmitting laser to become hotter during the string of A1 bytes and cooler during the string of A2 bytes. The thermal drift affects the laser performance such that the signal level changes, making it difficult for the receiver threshold detector to track. Most high-speed systems have addressed this problem by using a laser that is continuously on and modulating the signal with a shutter after the laser.
We know that in SDH the frame period is fixed, i.e. 125 µs.
In OTN, by contrast, the frame period is variable while the frame size is fixed.
So the frame period for OTN can be calculated by the following method:
Frame period = ODUk frame size (bits) / ODUk bit rate (bit/s) ……(1)
Also, we know that
STM-16 rate = OPU1 payload rate = 16 × 9 × 270 × 8 × 8000 = 2,488,320,000 bit/s
Now assume a multiplicative factor (Mk)** for the calculation of the various rates:

Signal | Mk
---|---
OPUk | 238/(239-k)
ODUk | 239/(239-k)
OTUk | 255/(239-k)
Now, the master formula to calculate the bit rate for the different O(P/D/T)Uk signals is:

Bit rate for O(P/D/T)Uk (bit/s) = Mk × X × STM-16 rate = Mk × X × 2,488,320,000 bit/s ……(2)

where X is the granularity in units of STM-16 (X = 1, 4, 16 for k = 1, 2, 3). Note that the (239-k) divisor holds only for k = 1, 2, 3; the later-defined ODU4/OTU4 rates use a divisor of 227 together with X = 40.
Putting the values from equation (2) into equation (1) gives the OTN frame periods.
E.g.:
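A minimal Python sketch of the calculation (frame sizes per G.709: every ODUk frame is 4 rows × 3824 columns of bytes, regardless of k):

```python
STM16 = 2_488_320_000            # bit/s, from the identity above

ODU_FRAME_BITS = 4 * 3824 * 8    # 122368 bits, fixed for every ODUk

def odu_rate(k, X):
    """Equation (2) for ODUk: Mk = 239/(239-k), valid for k = 1, 2, 3."""
    return 239 / (239 - k) * X * STM16

for k, X in [(1, 1), (2, 4), (3, 16)]:
    rate = odu_rate(k, X)
    period_us = ODU_FRAME_BITS / rate * 1e6   # equation (1)
    print(f"ODU{k}: {rate / 1e9:.6f} Gbit/s, frame period {period_us:.3f} us")

# ODU1:  2.498775 Gbit/s, frame period 48.971 us
# ODU2: 10.037274 Gbit/s, frame period 12.191 us
# ODU3: 40.319219 Gbit/s, frame period  3.035 us
```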
For further queries revert:)
**The multiplicative factor is just simple math: e.g. for ODU1/OPU1 = 3824/3808 = (239×16)/(238×16) = 239/238.
The multiplicative factor gives the factor by which the frame size grows after the header/overhead is added.
As we are using Reed-Solomon RS(255,239), each 4080-byte OTUk row is divided into sixteen interleaved codewords (the forward error correction for the OTUk uses 16-byte interleaved codecs using a Reed-Solomon RS(255,239) code; the RS(255,239) code operates on byte symbols).
Hence 4080/16 = 255 bytes per codeword, of which 239 are data bytes; once you see this, the rest is simple math. :)
>> SPECIFICATION COMPARISON
>CHARACTERISTICS COMPARISON
>>WDM BAND
EDFA-Erbium Doped Fiber Amplifier
EDFA block
RAMAN AMPLIFIER
SOA-Semiconductor Optical Amplifier
Latency in Fiber Optic Networks
As we are all aware, Internet traffic is growing very fast. The more information we transmit, the more we need to think about parameters like available bandwidth and latency. Bandwidth is usually understood by end-users as the important indicator and measure of network performance. It is surely a reliable figure of merit, but it mainly depends on the characteristics of the equipment. Unlike bandwidth, latency and jitter depend on the specific context of transmission network topology and traffic conditions.
By latency we understand the delay from the time of packet transmission at the sender to the end of packet reception at the receiver. If latency is too high it spreads data packets over time and can create the impression that an optical metro network is not operating at the expected data transmission speed. Data packets are still being transported at the same bit rate, but due to latency they are delayed, which affects the overall transmission system performance.
It should be pointed out that there is a need for low latency optical networks in almost all industries where data transmission takes place. Low latency is becoming a critical requirement for a wide set of applications like financial transactions, videoconferencing, gaming, telemedicine and cloud services, which require transmission lines with almost no delay. These industries are summarized in Table 1 below.
Table 1. Industries where low latency services are very important.
In fiber optical networks, latency consists of three main components, which add extra time delay:
- the optical fiber itself,
- optical components
- opto-electrical components.
Therefore, for the service provider it is extremely important to choose the best network components and to think about an efficient low latency transport strategy.
Latency is a critical requirement for the applications mentioned above. Even a latency of 250 ns can make the difference between winning and losing a trade. Latency reduction is very important in the financial sector; for example, in the stock exchange market 10 ms of latency could potentially result in a 10% drop in revenues for a company. No matter how fast you can execute a trade command, if your market data is delayed relative to competing traders, you will not achieve the expected fill rates and your revenue will drop. Low latency trading has moved from executing a transaction within several seconds to milliseconds, microseconds, and now even nanoseconds.
LATENCY SOURCES IN OPTICAL NETWORKS
Latency is the time delay experienced in a system; it describes how long it takes for data to get from the transmitting side to the receiving side. In a fiber optical communication system it is essentially the length of optical fiber divided by the speed of light in the fiber core, supplemented with the delay induced by optical and electro-optical elements, plus any extra processing time required by the system, also called overhead. Signal processing delay can be reduced by using parallel processing based on large scale integration CMOS technologies.
In addition to the propagation latency of the fiber itself, other building blocks along the path add to the total data transport time. These elements include:
- opto-electrical conversion,
- switching and routing,
- signal regeneration,
- amplification,
- chromatic dispersion (CD) compensation,
- polarization mode dispersion (PMD) compensation,
- data packing, digital signal processing (DSP),
- protocols and additional forward error correction (FEC)
The data transmission speed over an optical metro network must be carefully chosen. If we upgrade a 2.5 Gbit/s link to a 10 Gbit/s link, then CD compensation or amplification may become necessary, but these also increase the overall latency. For optical lines with transmission speeds above 10 Gbit/s (e.g. 40 Gbit/s) a need for coherent detection arises. In coherent detection systems CD can be compensated electrically using DSP, which also adds latency. Therefore, some companies avoid using coherent detection for their low-latency network solutions.
From the standpoint of personal communications, effective dialogue requires latency < 200 ms, an echo needs > 80 ms to be distinguished from its source, remote music lessons require latency < 20 ms, and remote performance < 5 ms. It has been reported that in virtual environments, human beings can detect latencies as low as 10 to 20 ms. In the trading industry or in telehealth, every microsecond matters. But in all cases, the lower the latency, the better the system performance.
Single mode optical fiber
In standard single-mode fiber, the major part of the light signal travels in the core while a small amount travels in the cladding. Optical fiber with a lower group index of refraction provides an advantage in low latency applications. It is useful to use the parameter "effective group index of refraction" (neff) instead of the "index of refraction" (n), which only defines the refractive index of the core or cladding of a single mode fiber. The neff parameter is a weighted average of all the indices of refraction encountered by light as it travels within the fiber, and therefore it represents the actual behavior of light within a given fiber. The impact of profile shape on neff, comparing its values for several Corning single mode fiber (SMF) products with different refractive index profiles, is illustrated in Fig. 2.
Figure 2. Effective group index of refraction for various commercially available Corning single mode fiber types.
It is known that the speed of light in vacuum is 299792.458 km/s. Assuming ideal propagation at the speed of light in vacuum, the unavoidable latency per kilometer can be calculated as in Equation (1):
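t(vacuum) = 1 km / c = 1 km / 299792.458 km/s ≈ 3.336 µs ……(1)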
However, due to the fiber’s refractive index light travels more slowly in optical fiber than in vacuum. In standard single mode fiber defined by ITU-T G.652 recommendation the effective group index of refraction (neff), for example, can be equal to 1.4676 for transmission on 1310 nm and 1.4682 for transmission on 1550 nm wavelength. By knowing neff we can express the speed of light in selected optical fiber at 1310 and 1550 nm wavelengths, see Equations (2) and (3):
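v(1310 nm) = c / neff = 299792.458 / 1.4676 ≈ 204274 km/s ……(2)
v(1550 nm) = c / neff = 299792.458 / 1.4682 ≈ 204190 km/s ……(3)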
By knowing speed of light in optical fiber at different wavelengths (see Equation (2) and (3) ) optical delay which is caused by 1 km long optical fiber can be calculated as following:
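t(1310 nm) = 1 km / 204274 km/s ≈ 4.895 µs ……(4)
t(1550 nm) = 1 km / 204190 km/s ≈ 4.897 µs ……(5)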
As one can see from Equations (4) and (5), the propagation delay of an optical signal is affected not only by the fiber type with its particular neff, but also by the wavelength used for data transmission over the fiber optical network. The optical signal delay in single mode optical fiber is about 4.9 µs per km. This value is the practical lower limit of latency achievable for 1 km of fiber, if it were possible to remove all other sources of latency caused by other elements and data processing overhead.
Photonic crystal fibers (PCFs) can have a very low effective refractive index, and can propagate light much faster than standard SMFs. For example, hollow core fiber (HCF) may provide up to 31% lower latency relative to traditional fiber optics. The problem is that attenuation in HCF is much higher than in already deployed standard single mode fibers (for SMF α = 0.2 dB/km, but for HCF α = 3.3 dB/km at 1550 nm). However, attenuation as low as 1.2 dB/km has been reported in hollow-core photonic crystal fiber.
Chromatic Dispersion Compensation
Chromatic dispersion (CD) occurs because different wavelengths of light travel at different speeds in optical fiber. CD can be compensated by dispersion compensation module (DCM) where dispersion compensating fiber (DCF) or fiber Bragg grating (FBG) is employed.
A typical long reach metro access fiber optical network will require DCF of approximately 15 to 25% of the overall fiber length. This means that the use of DCF adds about 15 to 25% to the latency of the fiber. For example, a 100 km long optical metro network using standard single mode fiber (SMF) can accumulate chromatic dispersion of about 1800 ps/nm at 1550 nm wavelength. For full CD compensation, about a 22.5 km long DCF spool with a large negative dispersion value (typically -80 ps/nm/km) is needed. If we assume that the light propagation speed in DCF is close to that in SMF, then the total latency of a 100 km long optical metro network with CD compensation using a DCF-based DCM is about 0.6 ms.
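The arithmetic above can be checked with a short sketch (values as in the text: 18 ps/nm/km accumulation for G.652 at 1550 nm, a -80 ps/nm/km DCF, and the ~4.9 µs/km propagation delay from Equations (4) and (5)):

```python
span_km = 100
cd_coeff_ps = 18            # ps/nm/km, G.652 SMF at 1550 nm
dcf_coeff_ps = -80          # ps/nm/km, dispersion compensating fiber
delay_us_per_km = 4.9       # us/km in silica fiber

accumulated_cd = span_km * cd_coeff_ps            # 1800 ps/nm
dcf_km = accumulated_cd / abs(dcf_coeff_ps)       # 22.5 km of DCF
total_latency_us = (span_km + dcf_km) * delay_us_per_km

print(f"DCF length: {dcf_km} km, total latency: {total_latency_us:.0f} us")
# DCF length: 22.5 km, total latency: 600 us (about 0.6 ms)
```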
One solution to avoid the need for chromatic dispersion compensation, or to reduce the length of DCF required, is to use optical fiber with a lower CD coefficient. For example, non-zero dispersion shifted fibers (NZ-DSFs) were developed to simplify CD compensation while making a wide band of channels available. NZ-DSF parameters are defined in the ITU-T G.655 recommendation. Today NZ-DSF fibers are optimized for regional and metropolitan high speed optical networks operating in the C and L optical bands. The C band covers the wavelength range from 1530 to 1565 nm, and the L band from 1565 to 1625 nm.
For commercially available NZ-DSF fiber, the chromatic dispersion coefficient can be from 2.6 to 6.0 ps/nm/km in the C-band and from 4.0 to 8.9 ps/nm/km in the L-band. In the 1550 nm region the typical CD coefficient is about 4 ps/nm/km for this type of fiber, i.e. about four times lower than for standard G.652 SMF. Since these fibers have lower dispersion than conventional single mode fiber, simpler compensation modules are used that add only up to 5% to the transmission time for NZ-DSF. This enables a lower latency than transmission over SMF. Another way to minimize the need for extra CD compensation, or reduce it to the necessary minimum, is dispersion shifted fiber (DSF), which is specified in the ITU-T G.653 recommendation. This fiber is optimized for use in the 1550 nm region and has no chromatic dispersion at 1550 nm. However, it is limited to single-wavelength operation due to non-linear four wave mixing (FWM), which causes optical signal distortions.
If CD is unavoidable, another technology for compensating accumulated CD is the deployment of fiber Bragg gratings (FBG). A DCM with FBG can compensate several hundred kilometers of CD without any significant latency penalty and effectively removes all the additional latency that DCF-based networks add. In other words, a lot of valuable microseconds can be gained by migrating from DCF DCMs to FBG DCMs in an optical metro network. The typical fiber length in an FBG used for dispersion compensation is about 10 cm. Therefore, an FBG-based DCM normally introduces only 5 to 50 ns of delay in the fiber optical transmission line.
Another way to avoid DCF DCMs, which introduce additional delay, is coherent detection, where complex modulation formats such as quadrature phase-shift keying (QPSK) can be used. However, it must be noted that this can be a poor choice from a latency perspective because of the added digital signal processing (DSP) time it requires. This additional delay can be up to 1 µs.
Optical amplifiers
Another key optical component which adds time delay to an optical transmission line is the optical amplifier. Erbium doped fiber amplifiers (EDFAs) are widely used in fiber optical access and long haul networks. An EDFA can amplify signals over a band of almost 30 to 35 nm extending from 1530 to 1565 nm, known as the C-band fiber amplifier, or from 1565 to 1605 nm, known as the L-band EDFA. The great advantage of EDFAs is that they are capable of amplifying many WDM channels simultaneously, with no need to amplify each individual channel separately. EDFAs also remove the requirement for optical-electrical-optical (OEO) conversion, which is highly beneficial from a low-latency perspective. However, it must be taken into account that an EDFA contains some tens of meters of erbium-doped optical fiber (Er3+), which adds extra latency, although this amount is small compared with other latency contributors. A typical EDFA contains up to 30 m of erbium doped fiber; these 30 m of additional fiber add about 147 ns (about 0.15 µs) of time delay.
A solution to avoid or reduce extra latency when amplification is necessary is to use a Raman amplifier instead of an EDFA, or together (in tandem) with an EDFA. This combination provides maximal signal amplification with minimal latency. Raman amplifiers use a different optical mechanism to amplify the optical signal: Raman amplification is realized through stimulated Raman scattering. The Raman gain spectrum is rather broad, and the peak of the gain is centered about 13 THz (100 nm in wavelength) below the frequency of the pump signal used. By pumping a fiber with a high-power pump laser, we can provide gain to other signals, with peak gain obtained 13 THz below the pump frequency. For example, using pumps around 1460-1480 nm provides Raman gain in the 1550-1600 nm window, which partly covers the C and L bands. Accordingly, we can use the Raman effect to provide gain at any wavelength we want to amplify. The main benefit regarding latency is that a Raman amplifier pumps the optical signal without adding fiber to the signal path; therefore we can assume that a Raman amplifier adds no latency.
Transponders and opto-electrical conversion
Any transmission line component which performs opto-electrical conversion increases total latency. Key elements performing opto-electrical conversion are transponders and muxponders. Transponders convert an incoming signal from the client to a signal suitable for transmission over the WDM link, and an incoming signal from the WDM link to a suitable signal toward the client. A muxponder does basically the same as a transponder, except that it has the additional ability to multiplex lower rate signals into a higher rate carrier (e.g. 10 Gbit/s services up to 40 Gbit/s transport) within the system, thereby saving valuable wavelengths in the optical metro network.
The latency of both transponders and muxponders varies depending on design, functionality, and other parameters. Muxponders typically operate in the 5 to 10 µs range per unit. More complex transponders include additional functionality such as in-band management channels; this complexity forces the unit design and latency to be very similar to a muxponder, in the 5 to 10 µs range. If additional FEC is used in these elements, the latency can be higher. Several telecommunications equipment vendors offer simpler and lower-cost transponders that omit FEC and in-band management channels, or implement them in ways that lower the device delay. These modules can operate at much lower latencies, from 4 ns to 30 ns. Some vendors even claim that their transponders operate with 2 ns latency, which is equivalent to adding about half a meter of SMF to the fiber optical path.
Optical signal regeneration
For low latency optical metro networks it is very important to avoid any regeneration and to focus on keeping the signal in the optical domain once it has entered the fiber. An optical-electronic-optical (OEO) conversion takes about 100 µs, depending on how much processing is required in the electrical domain. Ideally a carrier would like to avoid the use of FEC or full 3R (reamplification, reshaping, retiming) regeneration; 3R regeneration needs OEO conversion, which adds unnecessary time delay. The need for optical signal regeneration is determined by the transmission data rate involved, whether dispersion compensation or amplification is required, and how many nodes the signal must pass through along the fiber optical path.
Forward error correction and digital signal processing
It is necessary to minimize the amount of electrical processing at both ends of a fiber optical connection. FEC, if used (for example, in transponders), will increase the latency due to the extra processing time. This latency can be roughly from 15 to 150 µs, depending on the algorithm used, the amount of overhead, the coding gain, the processing time and other parameters.
Digital signal processing (DSP) can be used to deal with chromatic dispersion (CD), polarization mode dispersion (PMD) and remove critical optical impairments. But it must be taken into account that DSP adds extra latency to the path. It has been mentioned before that this additional introduced delay can be up to 1 μs.
Latency regarding OSI Levels
Latency is added not only by the physical medium but also by the data processing implemented in the electronic parts of a fiber optical metro network (basically the transmitter and receiver). All modern networks are based upon the Open System Interconnection (OSI) reference model, which consists of a 7 layer protocol stack, see Fig. 3.
Figure 3. OSI reference model illustrating (a) total latency increase over each layer and (b) data way passing through all protocol layers in transmitter and receiver.
SUMMARY
Latency sources in optical metro network and typical induced time delay values.
Introduction
Automatic Protection Switching (APS) is one of the most valuable features of SONET and SDH networks. Networks with APS quickly react to failures, minimizing lost traffic, which minimizes lost revenue to service providers. The network is said to be “self healing.” This application note covers how to use Sunrise Telecom SONET and SDH analyzers to measure the amount of time it takes for a network to complete an automatic protection switchover. This is important since ANSI T1.105.1 and ITU-T G.841 require that a protection switchover occur within 50 ms. To understand the APS measurement, a brief review is first given. This is followed by an explanation of the basis behind the APS measurement. The final section covers how to operate your Sunrise Telecom SONET and SDH equipment to make an APS time measurement.
What Is APS?
Automatic Protection Switching keeps the network working even if a network element or link fails. The Network Elements (NEs) in a SONET/SDH network constantly monitor the health of the network. When a failure is detected by one or more network elements, the network proceeds through a coordinated predefined sequence of steps to transfer (or switchover) live traffic to the backup facility (also called “protection” facility). This is done very quickly to minimize lost traffic. Traffic remains on the protection facility until the primary facility (working facility) fault is cleared, at which time the traffic may revert to the working facility.
In a SONET or SDH network, the transmission is protected on optical sections from the Headend (the point at which the Line/Multiplexer Section Overhead is inserted) to the Tailend (the point where the Line/Multiplexer Section Overhead is terminated).
The K1 and K2 Line/Multiplexer Section Overhead bytes carry an Automatic Protection Switching protocol used to coordinate protection switching between the headend and the tailend.
The protocol for the APS channel is summarized in Figure 1. The 16 bits within the APS channel contain information on the APS configuration, detection of network failure, APS commands, and revert commands. When a network failure is detected, the Line/Multiplexer Section Terminating Equipment communicates and coordinates the protection switchover by changing certain bits within the K1 & K2 bytes.
During the protection switchover, the network elements signal an APS by sending AIS throughout the network. AIS is also present at the ADM drop points. The AIS condition may come and go as the network elements progress through their algorithm to switch traffic to the protection circuit.
AIS signals an APS event. But what causes the network to initiate an automatic protection switchover? The three most common are:
- Detection of AIS (AIS is used to both initiate and signal an APS event)
- Detection of excessive B2 errors
- Initiation through a network management terminal
According to GR-253 and G.841, a network element is required to detect AIS and initiate an APS within 10 ms. B2 errors should be detected according to a defined algorithm, and more than 10 ms is allowed. This means that the entire time for both failure detection and traffic restoration may be 60 ms or more (10 ms or more detect time plus 50 ms switch time).
Protection Architectures
There are two types of protection for networks with APS:
- Linear Protection, based on ANSI T1.105.1 and ITU-T G.783 for point-to-point (end-to-end) connections.
- Ring Protection, based on ANSI T1.105.1 and ITU-T G.841 for ring structures (ring structures can also be found with two types of protection mechanisms – Unidirectional and Bidirectional rings).
Refer to Figures 2-4 for APS architectures and unidirectional vs. bidirectional rings.
Protection Switching Schemes
The two most common schemes are 1+1 protection switching and 1:n protection switching. In both structures, the K1 byte contains both the switching preemption priorities (in bits 1 to 4), and the channel number of the channel requesting action (in bits 5 to 8). The K2 byte contains the channel number of the channel that is bridged onto protection (in bits 1 to 4), and the mode type (in bit 5); bits 6 to 8 contain various conditions such as AIS-L, RDI-L, indication of unidirectional or bidirectional switching.
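As a rough illustration of this bit layout (a sketch only; field meanings are abbreviated, and the example byte values are made up rather than taken from the standards' code-point tables):

```python
def decode_k1_k2(k1: int, k2: int) -> dict:
    """Split the K1/K2 APS bytes into their fields.

    Bit 1 is the most significant bit, per SONET/SDH convention.
    """
    return {
        "k1_request": (k1 >> 4) & 0x0F,  # K1 bits 1-4: switching request/priority
        "k1_channel": k1 & 0x0F,         # K1 bits 5-8: channel requesting action
        "k2_channel": (k2 >> 4) & 0x0F,  # K2 bits 1-4: channel bridged onto protection
        "k2_mode": (k2 >> 3) & 0x01,     # K2 bit 5: mode type
        "k2_status": k2 & 0x07,          # K2 bits 6-8: AIS-L / RDI-L / uni- vs bidirectional
    }

print(decode_k1_k2(0xC2, 0x25))
# {'k1_request': 12, 'k1_channel': 2, 'k2_channel': 2, 'k2_mode': 0, 'k2_status': 5}
```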
1+1
In 1+1 protection switching, there is a protection facility (backup line) for each working facility. At the headend, the optical signal is bridged permanently (split into two signals) and sent over both the working and the protection facilities simultaneously, producing a working signal and a protection signal that are identical. At the tailend, both signals are monitored independently for failures. The receiving equipment selects either the working or the protection signal. This selection is based on the switch initiation criteria which are either a signal fail (hard failure such as the loss of frame (LOF) within an optical signal), or a signal degrade (soft failure caused by a bit error ratio exceeding some predefined value).
Refer to Figure 5.
Normally, 1+1 protection switching is unidirectional, although if the line terminating equipment at both ends supports bidirectional switching, the unidirectional default can be overridden. Switching can be either revertive (the flow reverts to the working facility as soon as the failure has been corrected) or nonrevertive (the protection facility is treated as the working facility).
In 1+1 protection architecture, all communications from the headend to the tailend are carried out over the APS channel via the K1 and K2 bytes. In 1+1 bidirectional switching, the K2 byte signaling indicates to the headend that a facility has been switched so that it can start to receive on the now active facility.
1:n
In 1:n protection switching, there is one protection facility for several working facilities (the range is from 1 to 14) and all communications from the headend to the tailend are carried out over the APS channel via the K1 and K2 bytes. All switching is revertive, that is, the traffic reverts back to the working facility as soon as the failure has been corrected. Refer to Figure 6.
Optical signals are normally sent only over the working facilities, with the protection facility being kept free until a working facility fails. Let us look at a failure in a bidirectional architecture. Suppose the tailend detects a failure on working facility 2. The tailend sends a message in bits 5-8 of the K1 byte to the headend over the protection facility requesting switch action. The headend can then act directly, or if there is more than one problem, the headend decides which is top priority. On a decision to act on the problem on working facility 2, the headend carries out the following steps:
- Bridges working facility 2 at the headend to the protection facility.
- Returns a message on the K2 byte indicating the channel number of the traffic on the protection channel to the tailend.
- Sends a Reverse Request to the tailend via the K1 byte to initiate bidirectional switch.
On receipt of this message, the tailend carries out the following steps:
- Switches to the protection facility to receive
- Bridges working facility 2 to the protection facility to transmit back
Now transmission is carried out over the new working facility.
Switch Action Comments
In unidirectional architecture, the tailend makes the decision on priorities. In bidirectional architecture, the headend makes the decision on priorities. If there is more than one failure at a time, a priority hierarchy determines which working facility will be backed up by the protection facility. The priorities that are indicated in bits 1-4 of the K1 byte are as follows:
- Lockout
- Forced switch (to protection channel regardless of its state) for span or ring; applies only to 1:n switching
- Signal fails (high then low priority) for span or ring
- Signal degrades (high then low priority) for span or ring; applies only to 1:n switching
- Manual switch (to an unoccupied fault-free protection channel) for span or ring; applies only to 1+1 LTE
- Wait-to-restore
- Exerciser for span or ring (may not apply to some linear APS systems)
- Reverse request (only for bidirectional)
- Do not revert (only 1+1 LTE provisioned for nonrevertive switching transmits Do Not Revert)
- No request
Depending on the protection architecture, K1 and K2 bytes can be decoded as shown in Tables 1-5.
Linear Protection (ITU-T G.783)
Ring Protection (ITU-T G.841)
Collected from the Sunrise Telecom APS application note.
GENERIC FRAMING PROCEDURE
The Generic Framing Procedure (GFP), defined in ITU-T G.7041, is a mechanism for mapping constant and variable bit rate data into synchronous SDH/SONET envelopes. GFP supports many types of protocols, including those used in local area networks (LANs) and storage area networks (SANs). In all cases GFP adds very low overhead, increasing the efficiency of the optical layer.
Currently, two modes of client signal adaptation are defined for GFP:
- Frame-Mapped GFP (GFP-F), a layer 2 encapsulation, PDU-oriented adaptation mode. It is optimized for data packet protocols (e.g. Ethernet, PPP, DVB) that are encapsulated into variable size frames.
- Transparent GFP (GFP-T), a layer 1 encapsulation or block-code oriented adaptation mode. It is optimized for protocols using an 8B/10B physical layer (e.g. Fibre Channel, ESCON, 1000BASE-X) that are encapsulated into constant size frames.
GFP could be seen as a method to deploy metropolitan networks, and simultaneously to support mainframes and server storage protocols.
Data packet aggregation using GFP: packets wait in queues to be mapped onto a TDM channel; at the far end, packets are dropped back into a queue and delivered. GFP frame multiplexing and sub-multiplexing: the figure shows the encapsulation mechanism and the transport of GFP frames in VC containers embedded in STM frames.
Figure GFP frame formats and protocols
Frame-mapped GFP
In Frame-mapped GFP (GFP-F), one complete client packet is mapped entirely into one GFP frame. Idle packets are not transmitted, resulting in more efficient transport. However, specific mechanisms are required to transport each type of protocol.
Figure GFP mapping clients format
GFP-F can be used for Ethernet, PPP/IP and HDLC-like protocols where efficiency and flexibility are important. To perform the encapsulation process it is necessary to receive the complete client packet, but this procedure increases the latency, making GFP-F inappropriate for time-sensitive protocols.
Transparent GFP (GFP-T)
Transparent GFP (GFP-T) is a protocol-independent encapsulation method in which all client code words are decoded and mapped into fixed-length GFP frames. The frames are transmitted immediately, without waiting for the entire client data packet to be received. It is thus also a Layer 1 transport mechanism, because all client characters are moved to the far end regardless of whether they are information, headers, control or any other kind of overhead.
GFP-T can adapt multiple protocols using the same hardware as long as they are based on 8B/10B line coding. These line codes are transcoded to 64B/65B and then encapsulated into fixed size GFP-T frames. Everything is transported, including inter-frame gaps, which can carry flow control characters or other additional information.
GFP-T is very good for isochronous or delay-sensitive protocols, and also for Storage Area Networks (SAN) such as ESCON or FICON. This is because it is not necessary to process client frames or to wait for the arrival of a complete frame. This advantage is counteracted by lost efficiency, because the source MSPP node still generates traffic when no data is being received from the client.
GFP enables MSPP nodes to offer both TDM and packet-oriented services, managing transmission priorities and discard eligibility. GFP replaces legacy mappings, most of them proprietary in nature. In principle GFP is just an encapsulation procedure, but a robust and standardised one for the transport of packetised data over SDH and OTN alike.
GFP uses a HEC-based delineation technique similar to ATM's; it therefore needs no bit or byte stuffing, and the frame size can easily be set to a constant length.
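A sketch of how HEC-based delineation can work, assuming the G.7041 cHEC is a CRC-16 with generator x^16 + x^12 + x^5 + 1 computed over the two-byte payload length indicator (the real procedure also XORs the four core-header bytes with the fixed pattern B6 AB 31 E0, omitted here for brevity):

```python
def crc16(data: bytes, poly=0x1021, init=0x0000) -> int:
    """Bitwise CRC-16 with generator x^16 + x^12 + x^5 + 1."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def hunt_gfp_frame(stream: bytes):
    """Slide byte by byte until a core header with a valid cHEC is found.

    Returns the candidate frame start, or None. A real receiver would
    confirm delineation by finding several consecutive valid headers.
    """
    for i in range(len(stream) - 3):
        pli = stream[i:i + 2]                     # payload length indicator
        chec = int.from_bytes(stream[i + 2:i + 4], "big")
        if crc16(pli) == chec:
            return i
    return None
```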
When using GFP-F mode there is an optional GFP extension header (protected by an eHEC) that can carry protocol-specific fields such as source/destination address, port numbers, class of service, etc. Among the EXI types, 'Linear' supports sub-multiplexing onto a single path, while a Channel ID (CID) enables sub-multiplexing over one VC channel in GFP-F mode.
CONCATENATION
Concatenation is the process of summing the bandwidth of X containers (C-i) into a larger container, providing a bandwidth X times bigger than C-i. It is well suited to the transport of big payloads requiring a container greater than VC-4, but it is also possible to concatenate low-capacity containers, such as VC-11, VC-12, or VC-2.
Figure An example of contiguous concatenation and virtual concatenation. Contiguous concatenation requires support by all the nodes. Virtual concatenation allocates bandwidth more efficiently, and can be supported by legacy installations.
There are two concatenation methods:
- Contiguous concatenation: creates big containers that cannot be split into smaller pieces during transmission. For this, each NE must have concatenation functionality.
- Virtual concatenation: which transports the individual VCs and aggregates them at the end point of the transmission path. For this, concatenation functionality is only needed at the path termination equipment.
Contiguous Concatenation of VC-4
A VC-4-Xc provides a payload area of X containers of C-4 type. It uses the same HO-POH used in VC-4, with identical functionality. This structure can be transported in an STM-n frame (where n = X), although other combinations are also possible; for instance, VC-4-4c can be transported in STM-16 and STM-64 frames. Concatenation guarantees the integrity of the bit sequence, because the whole container is transported as a unit across the whole network.
Obviously, an AU-4-Xc pointer, just like any other AU pointer, indicates the position of J1, the first byte of the VC-4-Xc container. The first pointer takes the same value as an AU-4 pointer, while the remaining pointer bytes take the fixed value Y = 1001SS11 to indicate concatenation. Pointer justification is carried out in the same way for all the X concatenated AU-4s, with X × 3 stuffing bytes.
However, contiguous concatenation today is more theory than practice, since more bandwidth-efficient alternatives, such as virtual concatenation, are gaining importance.
Virtual Concatenation
Connectionless and packet-oriented technologies, such as IP or Ethernet, do not match well the bandwidth granularity provided by contiguous concatenation. For example, to implement a transport requirement of 1 Gbps, it would be necessary to allocate a VC-4-16c container, which has a 2.4 Gbps capacity, more than double the bandwidth needed.
Virtual concatenation (VCAT) is a solution that allows granular increments of bandwidth in single VC-n units. At the MSPP source node, VCAT creates a continuous payload equivalent to X times the VC-n unit. The set of X containers is known as a virtual container group (VCG), and each individual VC is a member of the VCG. All the VC members are sent to the MSPP destination node independently, using any available path if necessary. At the destination, the VC-n members are reordered according to the indications provided by the H4 or V5 byte, and finally delivered to the client.
Differential delays between VCG members are likely, because they are transported individually and may have used different paths with different latencies. Therefore, the destination MSPP must compensate for the differential delays before reassembling the payload and delivering the service.
Virtual concatenation is required only at the edge nodes, and is compatible with legacy SDH networks even though those networks do not support concatenation themselves. To get the full benefit, individual containers should be transported over different routes across the network, so that if a link or a node fails the connection is only partially affected. This is also a way of providing a resilient service.
Higher Order Virtual Concatenation
Higher Order Virtual Concatenation (HO-VCAT) uses X times VC-3 or VC-4 containers (VC-3/4-Xv, X = 1 … 256), providing a payload capacity of X times 48384 or 149760 kbit/s.
The virtually concatenated container VC-3/4-Xv is mapped into independent VC-3 or VC-4 envelopes that are transported individually through the network. Differential delays may occur between the individual VCs; this obviously has to be compensated for when the original payload is reassembled.
A multiframe mechanism has been implemented in H4 to compensate for differential delays of up to 256 ms:
- Every individual VC carries an H4 multiframe indicator (MFI) that identifies the virtual container group it belongs to
- Each VC also indicates its position X in the virtual container group using the SEQ number, which is also carried in H4; the SEQ is repeated every 16 frames
The H4 POH byte is used for the virtual concatenation-specific sequence and multiframe indication
Lower Order Virtual Concatenation
Lower Order Virtual Concatenation (LO-VCAT) uses X times VC-11, VC-12, or VC-2 containers (VC-11/12/2-Xv, X = 1 … 64).
A VCG built from VC-11, VC-12 or VC-2 members provides a payload of X containers C-11, C-12 or C-2, that is, a capacity of X times 1600, 2176 or 6784 kbit/s. VCG members are transported individually through the network, so differential delays may occur between the individual members of a VCG; these are compensated for at the destination node before the original continuous payload is reassembled.
A multiframe mechanism has been implemented in bit 2 of K4. It includes a sequence number (SQ) and a multiframe indicator (MFI), which together enable the reordering of the VCG members. The MSPP destination node waits until the last member arrives and can compensate for delays of up to 256 ms. Note that K4 is itself received once per 500 µs multiframe; the 32-bit K4 bit-2 string therefore takes 16 ms, and the whole multiframe sequence repeats every 512 ms.
Testing VCAT
When installing or maintaining VCAT it is important to carry out a number of tests to verify not only the performance of the whole virtually concatenated group, but also every single member of the VCG. To reassemble the original client data, all the members of the VCG must arrive at the far end, with the delay between the first and the last member of a VCG kept below 256 ms. A missing member prevents the reconstruction of the payload, and if the problem persists it causes a fault that calls for the reconfiguration of the VCAT pipe. Additionally, jitter and wander on individual paths can cause anomalies (errors) in the transport service.
BER, latency, and event tests should verify the capacity of the network to perform the service. The VCAT granularity has to be checked as well, by adding/removing members. To verify the reassembly operation it is necessary to use a tester capable of inserting differential delays into individual members of a VCG.
Single-mode fibre selection for Optical Communication System
This is collected from an article written by Mr. Joe Botha.
Looking for a single-mode (SM) fibre to light up your multi-terabit per second system? Probably not, but let's say you were – the smart money is on your well-intended fibre sales rep instinctively flogging you ITU-T G.652D fibre. Commonly referred to as standard SM fibre and also known as Non-Dispersion-Shifted Fibre (NDSF) – the oldest and most widely deployed fibre. Not a great choice, right? You bet. So for now, let's resist the notion that you can do whatever-you-want using standard SM fibre. A variety of SM optical fibres with carefully optimised characteristics are available commercially: ITU-T G.652, 653, 654, 655, 656 or 657 compliant.
Designs of SM fibre have evolved over the decades, and present-day options would have us deploy G.652D, G.655 or G.656 compliant fibres. Note that G.657A is essentially a more expressive version of G.652D, with superior bending loss performance, and should you start feeling a little benevolent towards deploying it on a longish haul – I can immediately confirm that this allows for a glimpse into the workings of silliness. Dispersion Shifted Fibre (DSF) in accordance with G.653 has no chromatic dispersion at 1550 nm. However, it is limited to single-wavelength operation due to non-linear four-wave mixing. G.654 compliant fibres were developed specifically for undersea un-regenerated systems, and since our focus is directed toward terrestrial applications – let's leave it at that.
In the above context, the plan is to briefly weigh up G.652D, G.655 and G.656 compliant fibres against three parameters we calculate (before installation) and measure (after installation). I must just point out that the fibre coefficients used are what one would expect from the not-too-shabby brands available today.
Attenuation
Attenuation is the reduction or loss of optical power as light travels through an optical fibre, and is measured in decibels per kilometer (dB/km). G.652D offers respectable attenuation coefficients when compared with G.655 and G.656. It should be remembered, however, that even a meagre 0.01 dB/km attenuation improvement would reduce a 100 km loss budget by a full dB – but let's not quibble. No attenuation coefficients for G.655 and G.656 at 1310 nm? It was not, as you may immediately assume, an oversight. Both G.655 and G.656 are optimized to support long-haul systems and therefore could not care less about running at 1310 nm. A cut-off wavelength is the minimum wavelength at which a particular fibre will support SM transmission. At ≤ 1260 nm, G.652D has the lowest cut-off wavelength – with the cut-off wavelengths for G.655 and G.656 sitting at ≤ 1480 nm and ≤ 1450 nm respectively – which explains why we have no attenuation coefficients for them at 1310 nm.
PMD
Polarization-mode dispersion (PMD) is an unwanted effect, caused by asymmetrical properties in an optical fibre, that spreads the optical pulse of a signal. Slight asymmetry in an optical fibre causes the polarized modes of the light pulse to travel at marginally different speeds, distorting the signal; PMD is reported in ps/√km, or "ps per root km". Oddly enough, G.652 and co all possess decent-looking PMD coefficients. Now then, popping a 40-Gbps laser onto my fibre up against an ultra-low 0.04 ps/√km, my calculator reveals that the PMD-admissible fibre length is 3,900 km, and even at 0.1 ps/√km a distance of 625 km is achievable.
So far so good? But wait, there's more. PMD is particularly troublesome for both high data-rate-per-channel and high wavelength-channel-count systems, largely because of its random nature. Fibre manufacturers' PMD specifications are accurate for the fibre itself, but do not incorporate PMD incurred as a result of installation, which in many cases can be orders of magnitude larger. It is hardly surprising that questionable installation practices are likely to cause imperfect fibre symmetry – the obvious implications being incomprehensible data streams and mental anguish. Moreover, PMD, unlike chromatic dispersion (discussed next), is also affected by environmental conditions, making it unpredictable and extremely difficult to undo or offset.
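A quick sketch reproducing the figures above, assuming the common rule of thumb that the tolerable mean DGD is 10% of the bit period:

```python
bit_rate_gbps = 40
bit_period_ps = 1000 / bit_rate_gbps     # 25 ps at 40 Gbps
dgd_budget_ps = 0.1 * bit_period_ps      # 2.5 ps allowed DGD

for pmd_coeff in (0.04, 0.1):            # ps per root km
    max_km = (dgd_budget_ps / pmd_coeff) ** 2
    print(f"{pmd_coeff} ps/sqrt(km) -> about {max_km:.0f} km")

# 0.04 ps/sqrt(km) -> about 3906 km (the ~3,900 km above)
# 0.1  ps/sqrt(km) -> about 625 km
```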
CD
CD (called chromatic dispersion to emphasise its wavelength-dependent nature) has zip-zero to do with the loss of light. It occurs because different wavelengths of light travel at different speeds. Thus, when the allowable CD is exceeded, light pulses representing a bit-stream are rendered illegible. It is expressed in ps/(nm·km). At 2.5 Gbps CD is not an issue – however, lower data rates are seldom desirable. At 10 Gbps it is a big issue, and the issue gets even bigger at 40 Gbps.
What's troubling is G.652D's high CD coefficient – which, one glumly has to concede, is very poor next to the competition. G.655 and G.656, variants of non-zero dispersion-shifted fibre (NZ-DSF), comprehensively address G.652D's shortcomings. It should be noted that nowadays some optical fibre manufacturers don't bother distinguishing between G.655 and G.656 – referring to their offerings as G.655/6 compliant.
On the face of it, one might suggest that the answer to our CD problem is to send light along an optical fibre at a wavelength where the CD is zero (i.e. G.653). The result? It turns out that this approach creates more problems than it is likely to solve – by unacceptably amplifying non-linear four-wave mixing and limiting the fibre to single-wavelength operation – in other words, no DWDM. That, in fact, is why CD should not be completely lampooned. Research revealed that the fibre-friendly CD value lies in the range of 6-11 ps/nm·km. Therefore, and particularly for high-capacity transport, the best-suited fibre is one in which dispersion is kept within a tight range, being neither too high nor too low.
NZ-DSFs are available in both positive (+D) and negative (-D) varieties. Using NZ-DSF -D, a reverse behavior of the velocity per wavelength is created, and therefore the effect of +CD can be cancelled out. I almost forgot to mention, by the way, that short wavelengths travel faster than long ones with +CD, and longer wavelengths travel faster than short ones with -CD. New sophisticated modulation techniques such as dual-polarized quadrature phase-shift keying (DP-QPSK) with coherent detection yield high quality CD compensation. However, because of the added signal processing time they require (versus simple on-off keying), this can potentially be a poor choice from a latency perspective.
WDM multiplies capacity
The use of Dense Wavelength Division Multiplexing (DWDM) technology and 40-Gbps (and higher) transmission rates can push the information-carrying capacity of a single fibre to well over a terabit per second. One example is EASSy's (a 4-fibre submarine cable serving sub-Saharan Africa) 4.72-Tbps capacity. Now then, should my maffs prove to be correct, 118 x 40-Gbps lasers (popped onto only 4 fibres!) should give us an aggregate capacity of 4.72-Tbps.
Coarse Wavelength Division Multiplexing (CWDM) is a WDM technology that uses 4, 8, 16 or 18 wavelengths for transmission. CWDM is an economically sensible option, often used for short-haul applications on G.652D, where signal amplification is not necessary. CWDM's large 20 nm channel spacing allows for the use of cheaper, less powerful lasers that do not require cooling.
One of the most important considerations in the fibre selection process is the fact that optical signals may need to be amplified along a route. Thanks in no small part (get the picture?) to CWDM's large channel spacing – typically spanning several spectral bands (1270 nm to 1610 nm) – its signals cannot be amplified using Erbium Doped-Fibre Amplifiers (EDFAs). You see, EDFAs run only in the C and L bands (1520 nm to 1625 nm). Whereas CWDM breaks the optical spectrum up into large chunks, DWDM by contrast slices it up finely, cramming 4, 8, 16, 40, 80, or 160 wavelengths (on 2 fibres) into only the C- and L-bands (1520 nm to 1625 nm) – perfect for the use of EDFAs. Each wavelength can without any obvious effort support a 40-Gbps laser, and on top of this, 100-Gbps lasers are chomping at the bit to go mainstream.
Making the right choice
On the whole, it is hard not to conclude that the only thing that genuinely separates fibre types for high-bit-rate systems is CD. The 3 things – the only ones that I can think of – that are good about G.652D are that it is affordable, cool for CWDM and perfect for short-haul environments. Top of the to-do lists of infrastructure providers pushing the boundaries of DWDM-enabled ultra high-capacity transport over short, long or ultra long-haul networks – needless to say – will be to source G.655/6 compliant fibres. The cross-tab below indicates: Green for OK and, oddly enough, Red for Not-OK.
ITU-T Compliant | 10-Gbps CWDM | 40-Gbps CWDM | 10-Gbps DWDM | 40-Gbps DWDM | 100-Gbps DWDM |
---|---|---|---|---|---|
G.652 | OK | NOK | NOK | NOK | NOK |
G.655/6 | OK | OK | OK | OK | OK |
Advantage of Coherent Optical Transmission System
Evolution to flexible grid WDM
FIGURE 1. The ITU's fixed 50-GHz grid and its 100-GHz variant form the foundation for today's optical networks.
Current generation
FIGURE 2. Optical rates and their spectral efficiency.
Superchannels
FIGURE 3. 400G modulation options and superchannels.
FIGURE 4. Flexible grid channel plan.
FIGURE 5. Transmitting 400G on legacy WDM networks.
CDC ROADMs
Flexing network muscles
Optical modulation for High Baud Rate networks…. 40G/100G Speed Networks……
>> What is Higher-Order Modulation Method?
>> Why Do We Need Higher-Order Modulation Methods?
>> A Brief Tutorial on Higher-Order Modulation Methods
- Significant improvement in minimum OSNR by reducing the signal bandwidth
- Compatibility with the 50 GHz ITU-T channel spacing or at least with a spacing of 100 GHz
- Coexistence with 10 Gbps systems
- Transmission in networks that use ROADMs
- Scalable for 100 Gbps
>> Current Higher-Order Optical Modulation Methods Status
Four Wave Mixing (FWM) in WDM System..
>> Nonlinear Effects in High Power, High Bit Rate Fiber Optic Communication Systems
>> Basic Principles of Four-Wave Mixing
How to Test a Fiber Optic System with an OTDR (Optical Time Domain Reflectometer)
>> The Optical Time Domain Reflectometer (OTDR)
Why is BER difficult to simulate or calculate?
For a given design at a BER (such as 10⁻¹² at a line rate of OC-3, or 155 Mbps), the network would have one error in approximately 10 days. It would take 1000 days to record a steady-state BER value. That is why BER calculations are quite difficult. On the other hand, Q-factor analysis is comparatively easy. Q is often measured in dB. The next question is how to dynamically calculate Q. This is done from OSNR.
In other words, Q is somewhat proportional to the OSNR. Generally, noise calculations are performed by optical spectrum analyzers (OSAs) or sampling oscilloscopes, and these measurements are carried out over a particular measurement bandwidth Bm. Typically, Bm is approximately 0.1 nm or 12.5 GHz for a given OSA. From Equation 4-12, which gives Q in dB in terms of OSNR, it can be understood that if B0 < Bc, then OSNR (dB) > Q (dB). For practical designs OSNR (dB) exceeds Q (dB) by at least 1-2 dB. Typically, while designing a high bit-rate system, the margin at the receiver is approximately 2 dB, such that Q is about 2 dB smaller than OSNR (dB).
Why Receiver Sensitivity is so important for optical module?
For Optical communication to happen, a receiver (essentially a photodetector, either a PIN or APD type) needs a minimum amount of power to distinguish the 0s and 1s from the raw input optical signal.
The minimum power requirement of the receiver is called the receiver sensitivity.
The optical power at the receiver end has to be within the dynamic range of the receiver; otherwise, it damages the receiver (if it exceeds the maximum value), or the receiver cannot differentiate between 1s and 0s (if the power level is less than the minimum value).
Optical Amplifiers……….The Payback!!
Optical amplifiers alleviate that problem by amplifying all the channels together, entirely in the optical domain; therefore, optical amplifiers can extend the transmission distance. So, does that mean that optical amplifiers can extend the distance as much as we want? Not really! Amplifiers come at a price and introduce a trade-off: they enhance the signal power level, but at the same time they add their own complement of noise. This noise is amplified spontaneous emission (ASE).
The noise is random in nature, and it is accumulated at each amplification stage.
Amplifier noise is a severe problem in system design. A figure of merit here is the optical signal-to-noise ratio (OSNR) requirement of the system. The OSNR specifies the ratio of net signal power to net noise power. Because it is a ratio of two powers, if the signal and the noise are both amplified, the OSNR still tells the quality of the signal. System design based on OSNR is an important fundamental design tool.
_____________________________________________________________________________________________
OSNR is not just limited to optical amplifier-based networks. Other active and passive devices can also add noise and create an OSNR-limited system design problem. Active devices such as lasers and amplifiers add noise. Passive devices such as taps and the fiber itself can add components of noise. In system design calculations, optical amplifier noise is considered the predominant source of OSNR penalty and degradation. That does not imply that other sources of OSNR penalty are unimportant.
_____________________________________________________________________________________________
What, Why and How of Pilot Tones in DWDM Systems?
There have been many efforts to use pilot tones (i.e., small sinusoidal components added to WDM signals) for the monitoring of WDM signals directly in the optical layer. For example, it has been reported that pilot tones can be used to monitor various optical parameters of WDM signals such as optical power, wavelength, optical signal-to-noise ratio (OSNR), and so on.
Pilot-tone-based techniques can monitor these parameters without expensive demultiplexing filters (such as tunable optical filters and diffraction gratings), so the technique can be extremely cost-effective. In addition, it is well suited for use in a dynamic WDM network, since pilot tones are bound to follow their corresponding optical signals anywhere in the network. Thus the optical path of each WDM signal can be monitored simply by tracking its tone frequency.
Although the pilot-tone-based monitoring technique has many advantages, it also has some limitations owing to the following problems.
First, the pilot tone could impose unwanted amplitude modulation on the data signal and degrade the receiver sensitivity.
Second, the performance of the pilot-tone-based monitoring technique can be deteriorated by ghost tones caused by cross-gain modulation (XGM) and stimulated Raman scattering (SRS). These problems can be mitigated by using proper amplitudes and frequencies of pilot tones. However, for use in a long-haul network with a large number of channels, it may still be necessary to restrict the number of WDM channels monitored at a time (by using an optical bandpass filter).
Operating Principle
The figure shows the operating principle of the pilot-tone-based monitoring technique. We assume that an optical signal is transmitted from node A to node C via node B. A small sinusoidal component (i.e., pilot tone) is added to the optical signal at node A. This pilot tone can be extracted at node B by use of a simple electronic circuit and can be used for monitoring various optical parameters such as optical power, wavelength, OSNR, and so on. Pilot tones can also be used to monitor the optical paths of WDM signals even in a dynamically reconfigurable network. This is because once the pilot tone is attached, it is bound to follow the optical signal throughout the network. Thus we can monitor the optical path of each WDM signal simply by tracking its corresponding tone frequency.
For practical applications, pilot tones should be capable of being added into and extracted from the optical signal anywhere in the network.
dBm: A Mathematical Interpretation
dBm definition
How to convert mW to dBm
How to convert dBm to mW
How to convert Watt to dBm
How to convert dBm to Watt
How to convert dBW to dBm
How to convert dBm to dBW
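The conversions behind these headings reduce to the following identities:

P(dBm) = 10 × log10( P(mW) / 1 mW )
P(mW) = 10^( P(dBm) / 10 )
P(dBm) = 10 × log10( P(W) ) + 30
P(W) = 10^( (P(dBm) - 30) / 10 )
P(dBm) = P(dBW) + 30
P(dBW) = P(dBm) - 30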
dBm to Watt, mW, dBW conversion table
Power (dBm) | Power (dBW) | Power (watt) | Power (mW) |
---|---|---|---|
-100 dBm | -130 dBW | 0.1 pW | 0.0000000001 mW |
-90 dBm | -120 dBW | 1 pW | 0.000000001 mW |
-80 dBm | -110 dBW | 10 pW | 0.00000001 mW |
-70 dBm | -100 dBW | 100 pW | 0.0000001 mW |
-60 dBm | -90 dBW | 1 nW | 0.000001 mW |
-50 dBm | -80 dBW | 10 nW | 0.00001 mW |
-40 dBm | -70 dBW | 100 nW | 0.0001 mW |
-30 dBm | -60 dBW | 1 μW | 0.001 mW |
-20 dBm | -50 dBW | 10 μW | 0.01 mW |
-10 dBm | -40 dBW | 100 μW | 0.1 mW |
-1 dBm | -31 dBW | 794 μW | 0.794 mW |
0 dBm | -30 dBW | 1.000 mW | 1.000 mW |
1 dBm | -29 dBW | 1.259 mW | 1.259 mW |
10 dBm | -20 dBW | 10 mW | 10 mW |
20 dBm | -10 dBW | 100 mW | 100 mW |
30 dBm | 0 dBW | 1 W | 1000 mW |
40 dBm | 10 dBW | 10 W | 10000 mW |
50 dBm | 20 dBW | 100 W | 100000 mW |
60 dBm | 30 dBW | 1 kW | 1000000 mW |
70 dBm | 40 dBW | 10 kW | 10000000 mW |
80 dBm | 50 dBW | 100 kW | 100000000 mW |
90 dBm | 60 dBW | 1 MW | 1000000000 mW |
100 dBm | 70 dBW | 10 MW | 10000000000 mW |
Introduction To The dB
Describing Power
Signal stages are cascaded, so powers are multiplied by gains or losses. This yields a lot of multiplications, which suggests the need for a logarithmic representation of power.
A logarithmic scale is used to compress the huge range of powers into convenient numbers and to turn multiplication into addition.
Logarithms
Log(x) = power to which base must be raised to give x. The base is chosen to be 10.
Some example logarithm values:
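For instance: Log(1) = 0, Log(2) ≈ 0.3, Log(4) ≈ 0.6, Log(10) = 1, Log(100) = 2, Log(1000) = 3.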
The deciBel
Represent gains or attenuations logarithmically (base 10) (the Bel)
But to make numbers more convenient, scale by a factor of 10 (the deciBel or dB)
Examples:
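For instance: a gain of ×2 is about 3 dB, ×10 is 10 dB, ×100 is 20 dB; a loss of ×0.5 is about -3 dB; ×1 is 0 dB.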
Since Log(A x B) = Log(A) + Log(B) we can add gains and losses.
PR = PT + 20 - 1 + 30 - 2 - 204 + 30 - 1 + 60 = PT - 68 dB
For converting from a power ratio to dB, first work out the powers of 10, e.g. 500 = 10² × 5, giving 20 dB plus the contribution of the remaining factor.
Then note the smaller factors: ×2 ≈ 3 dB, ×4 ≈ 6 dB, ×5 ≈ 7 dB, so 500 ≈ 20 + 7 = 27 dB.
Examples of converting from dB to a ratio (or more generally, ratio = 10^(dB/10)): 13 dB = 10 dB + 3 dB, i.e. 10 × 2 = 20 times.
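A small Python sketch capturing these conversions:

```python
import math

def ratio_to_db(ratio: float) -> float:
    """Power ratio -> dB."""
    return 10 * math.log10(ratio)

def db_to_ratio(db: float) -> float:
    """dB -> power ratio."""
    return 10 ** (db / 10)

print(ratio_to_db(1000))  # 30.0 dB
print(db_to_ratio(27))    # ~501.2, matching the 500 ~ 27 dB example above
```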
Applying dB to Other Units
By default, dB is a power ratio. But it can be other things, for example, dB banana = dB relative to 1 banana.
dBW = dB relative to 1 watt, so: 0 dBW = 1 W = 30 dBm, 10 dBW = 10 W, and -30 dBW = 1 mW = 0 dBm.
Bandwidth in Hz can be expressed in dB-Hz
Similarly, noise temperature relative to 1 kelvin can be expressed in dB-K.
By default, with dBs we are dealing with power.
Thus a change in power (e.g. due to amplification) can be represented by: Gain (dB) = 10 Log(Pout / Pin).
TIP: Take care with "voltage gain in dB", which is usually a power gain, i.e. 20 Log(V2 / V1).
How Big Is A dB?
Examples of BER vs. Eb/No in dB:
What does a 30 dB gain mean???
Puzzled? Earlier I used to be the same, but it's really simple to understand. It means:
For every input PHOTON there will be 1000 PHOTONs at the output. That's what gain is.
X dB corresponds to a ratio of 10^(X/10).
You can verify the same with the above formula… Make things simpler.