The main advantages and drawbacks of EDFAs are as follows.
Advantages
Commercially available in C band (1530 to 1565 nm) and L band (1560 to 1605 nm), with ranges up to 84 nm demonstrated at the laboratory stage.
Excellent coupling: The amplifier medium is an SM fiber;
Insensitivity to light polarization state;
Low sensitivity to temperature;
High gain: > 30 dB, with gain flatness < ±0.8 dB and < ±0.5 dB in the C and L bands, respectively, as reported in the scientific literature and in manufacturer documentation;
Low noise figure: 4.5 to 6 dB
No distortion at high bit rates;
Simultaneous amplification of wavelength division multiplexed signals;
Immunity to crosstalk among wavelength multiplexed channels (to a large extent)
Drawbacks
Pump laser necessary;
Difficult to integrate with other components;
Need to use a gain equalizer for multistage amplification;
Dropping channels can give rise to errors in surviving channels: dynamic control of amplifiers is necessary.
A Lower Order ODU (LO-ODU) is an ODUk whose OPUk transports a directly mapped client signal.
A Higher Order ODU (HO-ODU) is an ODUk whose OPUk transports multiplexed ODUj signals.
LO-ODU and HO-ODU have the same structure but carry different clients.
The LO-ODU is either mapped into the associated OTUk or multiplexed into an HO-ODU.
The HO-ODU is mapped into the associated OTUk.
Please note that multiplexing an HO-ODUj into an HO-ODUk (multi-stage multiplexing) is an undesired hierarchy within a single domain.
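As a rough illustration of these rules, the sketch below (Python, with a hypothetical is_valid_mux-style helper and a simplified subset of the G.709 multiplexing combinations, not the full standard table) flags single-stage LO-into-HO multiplexing as valid and HO-into-HO chains as undesired:

```python
# Minimal sketch of ODU multiplexing rules; the allowed-combinations table is a
# simplified illustrative subset, not the complete G.709 multiplexing table.
ALLOWED_MUX = {
    "ODU0": {"ODU1", "ODU2", "ODU3", "ODU4"},
    "ODU1": {"ODU2", "ODU3", "ODU4"},
    "ODU2": {"ODU3", "ODU4"},
    "ODU3": {"ODU4"},
}

def classify(server: str, client: str, client_is_ho: bool) -> str:
    """Classify a proposed client-into-server multiplexing step."""
    if server not in ALLOWED_MUX.get(client, set()):
        return "invalid: not a defined ODUj-into-ODUk combination"
    if client_is_ho:
        # HO-ODUj multiplexed into HO-ODUk: possible to build, but an
        # undesired hierarchy within a single domain.
        return "undesired: multi-stage HO-into-HO multiplexing"
    return "valid: LO-ODU multiplexed into HO-ODU"

print(classify("ODU2", "ODU1", client_is_ho=False))  # valid
print(classify("ODU3", "ODU2", client_is_ho=True))   # undesired
```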
The ITU standards define a “suspect internal flag” which should indicate if the data contained within a register is ‘suspect’ (conditions defined in Q.822). This is more frequently referred to as the IDF (Invalid Data Flag).
PM is bound by strict data-collection rules as defined in the standards. When the collection of PM parameters is affected, the PM system labels the collected data as suspect with an Invalid Data Flag (IDF). For identification, a unique flag is shown next to the corresponding counter.
The purpose of the flag is to indicate when the data in the PM bin may be incomplete or may have been affected such that it is not completely reliable. The IDF does not indicate a software fault.
Some of the common reasons for setting the IDF include:
a collection time period that does not start within +/- 1 second of the nominal collection window start time.
a time interval that is inaccurate by +/- 10 seconds (or more)
the current time period changes by +/- 10 seconds (or more)
a restart (System Controller restarts will wipe out all history data and cause time fluctuations at line/client module; a module restart will wipe out the current counts)
a PM bin is cleared manually
a hardware failure prevents PM from properly collecting a full period of PM data (PM clock failure)
a protection switch has caused a change of payload on a protection channel.
a payload reconfiguration has occurred (similar to above but not restricted to protection switches).
a System Controller archive failure has occurred, preventing history data from being collected from the line/client cards
protection mode is switched from non-revertive to revertive (affects PSD only)
a protection switch clear indication is received when no raise was indicated
laser device failure (affects physical PMs)
loss of signal (affects receive – OPRx, IQ – physical PMs only)
the Control Plane has been up for less than the full 15-minute period (for 15-minute intervals) or less than the full 24-hour period (for 24-hour intervals).
A suspect interval is determined by comparing nSamples to nTotalSamples on a counter PM. If nSamples is not equal to nTotalSamples, the period is marked as suspect.
If any 15-minute interval is marked as suspect, or if reporting for that day's interval did not start at midnight, the corresponding 24-hour interval is flagged as suspect.
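A minimal sketch of that rule (the field names here are illustrative, not from any particular NE's data model):

```python
from dataclasses import dataclass

@dataclass
class Bin15min:
    n_samples: int        # samples actually collected in the period
    n_total_samples: int  # samples expected for a complete period

def is_suspect_15min(b: Bin15min) -> bool:
    # A 15-minute bin is suspect when collection was incomplete.
    return b.n_samples != b.n_total_samples

def is_suspect_24h(bins: list[Bin15min], started_at_midnight: bool) -> bool:
    # The 24-hour bin inherits suspicion from any suspect 15-minute bin,
    # or from a day interval whose reporting did not start at midnight.
    return (not started_at_midnight) or any(is_suspect_15min(b) for b in bins)
```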
Some of the common examples are:
Interface type is changed to another compatible interface (10G SR interface replaced by 10G DWDM interface),
Line type is changed from SONET to SDH,
Equipment failures are detected and those failures inhibit the accumulation of PM.
Transitions to/from the ‘locked’ state.
The System shall mark a given accumulation period invalid when the facility object is created or deleted during the interval.
A short discussion on 980 nm and 1480 nm pump-based EDFAs
Introduction
The 980 nm pump needs three energy levels for radiation, while a 1480 nm pump can excite the ions directly to the metastable level.
Figure: (a) Energy-level scheme of the ground and first two excited states of Er ions in a silica matrix. The sublevel splitting and the lengths of the arrows representing absorption and emission transitions are not drawn to scale. For the 4I11/2 state, τ is the lifetime for nonradiative decay to the 4I13/2 first excited state, and τsp is the spontaneous lifetime of the 4I13/2 first excited state. (b) Absorption coefficient, α, and emission coefficient, g*, spectra for a typical aluminum co-doped EDF.
The most important feature of the level scheme is that the transition energy between the I15/2 ground state and the I13/2 first excited state corresponds to photon wavelengths (approximately 1530 to 1560 nm) for which the attenuation in silica fibers is lowest. Amplification is achieved by creating an inversion by pumping atoms into the first excited state, typically using either 980 nm or 1480 nm diode lasers. Because of the superior noise figure they provide and their superior wall plug efficiency, most EDFAs are built using 980 nm pump diodes. 1480 nm pump diodes are still often used in L-band EDFAs although here, too, 980 nm pumps are becoming more widely used.
Though 1480 nm pumping is used and has a higher optical power conversion efficiency than 980 nm pumping, the latter is preferred because of the following advantages over 1480 nm pumping.
It provides a wider separation between the laser wavelength and pump wavelength.
980 nm pumping gives less noise than 1480nm.
Unlike 1480 nm pumping, 980 nm pumping cannot stimulate back transition to the ground state.
980 nm pumping also gives a higher signal gain, the maximum gain coefficient being 11 dB/mW against 6.3 dB/mW for 1480 nm pumping.
The better performance of 980 nm pumping over 1480 nm pumping is related to the fact that the former has a narrower absorption spectrum.
The inversion factor almost reaches 1 in the case of 980 nm pumping, whereas for 1480 nm pumping the best one gets is about 1.6.
Quantum mechanics puts a lower limit of 3 dB on the optical noise figure at high optical gain. 980 nm pumping provides a value of 3.1 dB, close to the quantum limit, whereas 1480 nm pumping gives a value of 4.2 dB (see the sketch after this list).
A 1480 nm pump needs more electrical power compared with a 980 nm pump.
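As a cross-check of those figures, here is a minimal sketch assuming the standard high-gain approximation NF ≈ 2·nsp; the measured values quoted above also depend on gain and wavelength, so the formula reproduces them only approximately:

```python
import math

def noise_figure_db(n_sp: float) -> float:
    # High-gain approximation: NF ≈ 2 * n_sp (linear), i.e. 3 dB for ideal inversion.
    return 10 * math.log10(2 * n_sp)

print(f"n_sp = 1.0 -> NF ≈ {noise_figure_db(1.0):.1f} dB")  # ≈ 3.0 dB (980 nm, near-ideal inversion)
print(f"n_sp = 1.6 -> NF ≈ {noise_figure_db(1.6):.1f} dB")  # ≈ 5.1 dB (1480 nm, upper bound)
```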
Application
980 nm pumped EDFAs are widely used in terrestrial systems, while 1480 nm pumps are used for Remote Optically Pumped Amplifiers (ROPAs) in subsea links where it is difficult to place amplifiers. For submarine systems, remote pumping can be used to avoid having to electrically feed the amplifiers and to remove electronic parts. Nowadays, this is used for pumping up to 200 km.
The erbium-doped fiber can be activated by a pump wavelength of 980 or 1480 nm, but only the latter is used in repeaterless systems, due to the lower fiber loss at 1.48 µm with respect to the loss at 0.98 µm. This allows the distance between the terminal and the remote amplifier to be increased.
In a typical configuration, the ROPA comprises a simple short length of erbium-doped fiber in the transmission line, placed a few tens of kilometers before a shore terminal or a conventional in-line EDFA. The remote EDF is backward-pumped by a 1480 nm laser from the terminal or in-line EDFA, thus providing signal gain.
Either homodyne or heterodyne detection can be used to convert the received optical signal into an electrical form. In the case of homodyne detection, the optical signal is demodulated directly to the baseband. Although simple in concept, homodyne detection is difficult to implement in practice, as it requires a local oscillator whose frequency matches the carrier frequency exactly and whose phase is locked to the incoming signal. Such a demodulation scheme is called synchronous and is essential for homodyne detection. Although optical phase-locked loops have been developed for this purpose, their use is complicated in practice.
Heterodyne detection simplifies the receiver design, as neither optical phase locking nor frequency matching of the local oscillator is required. However, the electrical signal oscillates rapidly at microwave frequencies and must be demodulated from the IF band to the baseband using techniques similar to those developed for microwave communication systems. Demodulation can be carried out either synchronously or asynchronously. Asynchronous demodulation is also called incoherent in the radio communication literature. In the optical communication literature, the term coherent detection is used in a wider sense.
A lightwave system is called coherent as long as it uses a local oscillator irrespective of the demodulation technique used to convert the IF signal to baseband frequencies.
* In the case of the homodyne coherent-detection technique, the local-oscillator frequency is selected to coincide with the signal-carrier frequency.
* In the case of heterodyne detection, the local-oscillator frequency is chosen to differ from the signal-carrier frequency.
Coherent light consists of two light waves that:
1) Have the same oscillation direction.
2) Have the same oscillation frequency.
3) Have the same phase or maintain a constant phase relationship with each other. Two coherent light waves produce interference within the area where they meet.
Principles of Coherent Communication
Coherent communication technologies mainly include coherent modulation and coherent detection.
Coherent modulation uses the signals that are propagated to change the frequencies, phases, and amplitudes of optical carriers. (Intensity modulation only changes the strength of light.)
Coherent detection mixes the laser light generated by a local oscillator (LO) with the incoming signal light using an optical hybrid, to produce an IF signal that maintains constant frequency, phase, and amplitude relationships with the signal light.
The motivation behind using the coherent communication techniques is two-fold.
First, the receiver sensitivity can be improved by up to 20 dB compared with that of IM/DD systems.
Second, the use of coherent detection may allow a more efficient use of fiber bandwidth by increasing the spectral efficiency of WDM systems
In a non-coherent WDM system, each optical channel on the line side uses only one binary channel to carry service information. The service transmission rate on each optical channel is called the bit rate, while the binary channel rate is called the baud rate. In this sense, the baud rate is equal to the bit rate. The spectral width of an optical signal is determined by the baud rate. Specifically, the spectral width is linearly proportional to the baud rate, which means a higher baud rate generates a larger spectral width.
Baud (pronounced /bɔ:d/ and abbreviated "Bd") is the unit for representing data communication speed. It indicates the number of signal changes occurring every second on a device, for example, a modulator-demodulator (modem). During encoding, one baud (namely, one signal change) can actually represent two or more bits. In current high-speed modulation techniques, each change in a carrier can transmit multiple bits, which makes the baud rate different from the transmission speed.
In practice, the spectral width of the optical signal cannot be larger than the frequency spacing between WDM channels; otherwise, the optical spectrums of the neighboring WDM channels will overlap, causing interference among data streams on different WDM channels and thus generating bit errors and a system penalty.
For example, the spectral width of a 100G BPSK/DPSK signal exceeds 50 GHz, which means that simply scaling the common 40G BPSK/DPSK modulator approach is not suitable for a 100G system with 50 GHz channel spacing, because it would cause a high crosstalk penalty. When the baud rate reaches 100 Gbaud, the spectral width of the BPSK/DPSK signal is greater than 50 GHz. Thus, it is impossible to achieve 50 GHz channel spacing in a 100G BPSK/DPSK transmission system.
(This is one reason that BPSK cannot be used in a 100G coherent system. The other reason is that high-speed ADC devices are costly.)
A 100G coherent system must employ new technology. The system must employ more advanced multiplexing technologies so that an optical channel contains multiple binary channels. This reduces the baud rate while keeping the line bit rate unchanged, ensuring that the spectral width is less than 50 GHz even after the line rate is increased to 100 Gbit/s. These multiplexing technologies include quadrature phase shift keying (QPSK) modulation and polarization division multiplexing (PDM).
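As a back-of-the-envelope sketch of that baud-rate reduction (nominal payload rate only; real 100G line rates are somewhat higher once OTN framing and FEC overhead are added):

```python
def baud_rate_gbd(bit_rate_gbps: float, bits_per_symbol: int, polarizations: int) -> float:
    # Baud rate = bit rate / (bits per symbol * number of polarizations)
    return bit_rate_gbps / (bits_per_symbol * polarizations)

# 100G with PDM-QPSK: QPSK carries 2 bits per symbol, PDM doubles that per symbol slot.
print(baud_rate_gbd(100, 2, 2))  # 25.0 GBd nominal; ~28-32 GBd once framing/FEC overhead is added
```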
For coherent signals with a wide optical spectrum, the traditional scanning method using an OSA or the in-band polarization method (EXFO) cannot correctly measure the system OSNR. Therefore, use the integral method to measure the OSNR of coherent signals.
Perform the following operations to measure OSNR using the integral method:
1. Position the central frequency of the wavelength under test in the middle of the screen of an OSA.
2. Select an appropriate bandwidth span for integration (for 40G/100G coherent signals, select 0.4 nm).
3. Read the sum of signal power and noise power within the specified bandwidth. On the OSA, enable the Trace Integ function and read the integral value. As shown in Figure 2, the integral optical power (P + N) is 9.68 µW.
4. Read the integral noise power within the specified bandwidth. Disable the related laser before testing the integral noise power. Obtain the integral noise power N within the signal bandwidth specified in step 2. The integral noise power (N) is 29.58 nW.
5. Calculate the integral noise power (n) within the reference noise bandwidth. Generally, the reference noise bandwidth is 0.1 nm. Read the integral power of the central frequency within the bandwidth of 0.1 nm. In this example, the integral noise power within the reference noise bandwidth is 7.395 nW.
6. Calculate OSNR: OSNR = 10 × log10{[(P + N) − N]/n}
In this example, OSNR = 10 × log10[(9.68 − 0.02958)/0.007395] = 31.156 dB
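The same arithmetic in a short sketch; the only subtlety is converting all powers to a common unit (µW here):

```python
import math

def osnr_db(p_plus_n_uw: float, n_uw: float, n_ref_uw: float) -> float:
    # OSNR = 10*log10(signal power / noise power in the 0.1 nm reference bandwidth)
    signal_uw = p_plus_n_uw - n_uw
    return 10 * math.log10(signal_uw / n_ref_uw)

# Values from the worked example: 9.68 uW total, 29.58 nW noise, 7.395 nW in 0.1 nm.
print(round(osnr_db(9.68, 0.02958, 0.007395), 3))  # 31.156 dB
```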
We follow the integral method because direct OSNR scanning cannot ensure accuracy, for the following reasons:
A 40G/100G signal has a larger spectral width than a 10G signal. As a result, the signal spectrums of adjacent channels overlap each other. This makes it difficult to test the OSNR using the traditional OSA method, which is implemented based on the interpolation of inter-channel noise that is assumed equivalent to in-band noise. Inter-channel noise power contains not only the ASE noise power but also the signal crosstalk power. Therefore, the OSNR obtained using the traditional OSA method is less than the actual OSNR. The figure below shows the signal spectrums in hybrid transmission of 40G and 10G signals with 50 GHz channel spacing. As shown in the figure, a severe spectrum overlap has occurred and the tested ASE power is greater than it should be.

As ROADM and OEQ technologies become mature and widely used, filter devices will impair the noise spectrum. As shown in the following figure, the noise power between channels decreases remarkably after signals traverse a filter. As a result, the OSNR obtained using the traditional OSA method is greater than the actual OSNR.
The addition of FEC overhead at the transmit end and FEC decoding at the receive end cause latency in signal transmission.
Relationship between latency and FEC scheme types
To improve the correction capability, more powerful and complex FEC codes must be used. However, the more complex the FEC codes are, the more time FEC decoding will take. As a result, the latency is increased.
For a non-coherent 40G signal, AFEC typically introduces a larger latency (60 to 120 µs) than standard FEC, which introduces a latency of about 30 µs.
Latency of a non-coherent 40G signal is related to the ODUk mapping mode.
Latency is also related to the amount of overhead. More overhead means that FEC decoding will take more time.
Latency introduced by 100G SD-FEC with 20% overhead > Latency introduced by 100G HD-FEC with 7% overhead
In addition, the latency is subject to the signal coding rate. With the overhead unchanged, there will be less latency as the signal rate increases.
Latency introduced by 100G HD-FEC < Latency introduced by 40G AFEC
Latency specifications of OTN equipment
Data and storage services are sensitive to latency, but currently no international standards define how much latency OTN signals must satisfy. Vendor equipment supports latency testing for different service rates, FEC schemes, and mapping modes. You can recommend a latency value to customers, but latency should not be an acceptance-test item; you cannot make a commitment to a latency specification.
Why does coherent detection introduce pre-FEC bit errors?
The DSP algorithm at the receive end detects and analyzes the phase and amplitude of a received signal in real time to calculate and compensate for the distortion of the signal caused by factors such as CD, PMD, and nonlinearity. Because CD, PMD, and nonlinearity vary with time, the compensation amount calculated by the DSP algorithm is not perfectly accurate, and thus pre-FEC bit errors occur.
In practice, the transmission distance should be extended as far as possible. The nonlinearity in a long-haul system causes large changes in signal phases. The DSP algorithm is thus required to lock the phase of each signal at a large tracking step, which enables fast locking of large phase changes but has poorer compensation accuracy. As a result, background noise is introduced and further bit errors occur (an error floor poorer than 1.0E-6). The background noise is the major factor that causes pre-FEC bit errors in back-to-back OSNR measurement and short-reach transmission; it is negligible compared with the noise introduced in long-haul transmission. Therefore, the large-step tracking method remarkably improves long-haul transmission performance without affecting short-reach transmission performance.
The DSP algorithm is independent of optical-layer configurations, such as back-to-back configurations, the transmission distance, and the number of spans. Therefore, in a back-to-back configuration, the DSP algorithm also has a compensation error and introduces bit errors.
Important Notes on BER for Coherent
For 100G coherent optical modules, the pre-FEC BERs may differ when different 100G boards are connected in a back-to-back manner or when WDM-side external loopbacks are performed, because of differences in the optical modules. Similarly, after signals traverse spans with good OSNRs, the BERs of different 100G boards may also differ. However, the use of an advanced DSP algorithm in the 100G boards ensures that all the 100G boards have the same FEC correction capability.
As shown in the figure below, the red and blue lines represent the test data of two different 100G boards.
Basic understanding of tap ratio for splitters/couplers
Fiber splitters/couplers divide optical power from one common port to two or more split ports, and combine all optical power from the split ports to one common port (1 × N coupler). They operate across an entire band or bands such as the C, L, or O bands. The three-port 1 × 2 tap is a splitter commonly used to access a small amount of signal power in a live fiber span for measurement or OSA analysis. Splitters are referred to by their splitting ratio, which is the power output of an individual split port divided by the total power output of all split ports. Popular splitting ratios are shown in the table below; however, others are available. The equation below can be used to estimate the splitter insertion loss for a typical split port. Excess splitter loss adds to the port's power division loss and is signal power lost due to the splitter properties. It typically varies between 0.1 and 2 dB; refer to the manufacturer's specifications for accurate values. It should be noted that the splitter function is symmetrical.
IL = −10 × log10(SR/100) + Γe

where IL = splitter insertion loss for the split port, dB
Pi = optical output power for a single split port, mW
PT = total optical power output for all split ports, mW
SR = splitting ratio for the split port, % (SR = 100 × Pi/PT)
Γe = splitter excess loss (typical range 0.1 to 2 dB), dB
Common splitter applications include
• Permanent installation in a fiber link as a tap with 2%|98% splitting ratio. This provides for access to live fiber signal power and OSA spectrum measurement without affecting fiber traffic. Commonly installed in DWDM amplifier systems.
• Video and CATV networks to distribute signals.
• Passive optical networks (PON).
• Fiber protection systems.
Example with calculation:
If a 0 dBm signal is launched into the common port of a 25%|75% splitter, then the two split ports' output powers will be about −6.2 and −1.5 dBm (the ideal division losses computed below plus a small excess loss). Because the splitter is symmetrical, if a 0 dBm signal is launched into the 25% split port, then the common port output power will likewise be about −6.2 dBm.
Calculation (ignoring excess loss):
Launch power = 0 dBm = 1 mW
Tap ratio is 25%|75%,
so the equivalent linear (mW) powers are
0.250 mW | 0.750 mW
and, after converting back to dBm:
−6.02 dBm | −1.25 dBm
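A small sketch reproducing these numbers; excess loss defaults to zero here, so set excess_db from the manufacturer's datasheet for a real device:

```python
import math

def split_port_power_dbm(launch_dbm: float, split_ratio_pct: float, excess_db: float = 0.0) -> float:
    # Power division loss = -10*log10(SR/100); excess loss adds on top of it.
    division_loss_db = -10 * math.log10(split_ratio_pct / 100)
    return launch_dbm - division_loss_db - excess_db

print(round(split_port_power_dbm(0, 25), 2))  # -6.02 dBm
print(round(split_port_power_dbm(0, 75), 2))  # -1.25 dBm
```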
Some common split ratios and their equivalent optical power are given below for reference.
Why is it good to have ORL > 30 dB for general fiber links?
Optical return loss (ORL) is the logarithmic ratio of the launch (incident) power divided by the total reflected power seen at the launch point. The total reflected power is the total accumulated reflected optical power measured at the launch, caused by fiber Rayleigh scattering and Fresnel reflections. Rayleigh scattering is the scattering of light along the entire length of the fiber, caused by elastic collisions between the light wave and fiber molecules; it results in some of the light being reflected back to the source. Rayleigh scattering is intrinsic to the fiber and therefore cannot be eliminated.
Fresnel reflections occur in the light path where there is an abrupt change in the refractive index, such as at connections and splices. The further away a reflective event is from the fiber launch point, the less it contributes to the total reflected power. Therefore, fiber connections and splices closest to the laser contribute the most to the ORL. ORL is always expressed as a positive decibel value; the higher the ORL, the lower the reflected power.
ORL = 10 × log10(Pi/PR)

where ORL = optical return loss, dB
PR = total reflected power seen at the launch point, mW
Pi = launch or incident power, mW
For the Rayleigh backscatter of a fiber with a nonreflective end, the reflected fraction is PR/Pi ≈ (S/2) × (1 − e^(−2αL)), giving

ORL ≈ −10 × log10[(S/2) × (1 − e^(−2αL))]

where S = backscattering capture coefficient, approximately 0.0015 for standard fiber at 1550 nm
L = fiber length, km
α = attenuation coefficient, 1/km
See the following calculation of ORL for SMF at 1550 nm.
Assume fiber attenuation is 0.22 dB/km at 1550 nm, i.e., α = 0.22/4.343 ≈ 0.051 km⁻¹,
S = 0.0015 with a nonreflective end,
L = 20 km.
After calculation using the above generic values, the ORL comes out at approximately 30 dB.
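A sketch of this arithmetic, under the backscatter model written above (it evaluates to about 31.9 dB, i.e., on the order of the ~30 dB quoted):

```python
import math

def rayleigh_orl_db(atten_db_per_km: float, length_km: float, s_capture: float = 0.0015) -> float:
    # Convert attenuation from dB/km to the linear coefficient alpha in 1/km.
    alpha = atten_db_per_km / (10 * math.log10(math.e))
    # Fraction of launch power returned by Rayleigh backscatter (nonreflective fiber end).
    reflected_fraction = (s_capture / 2) * (1 - math.exp(-2 * alpha * length_km))
    return -10 * math.log10(reflected_fraction)

print(round(rayleigh_orl_db(0.22, 20), 1))  # ~31.9 dB for the values above
```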
ITU-T G.959.1 recommends a minimum ORL of 24 dB for 2.5, 10, and 40 Gbps fiber links.
Q is the quality of a communication signal and is related to BER. A lower BER gives a higher Q, and a higher Q thus means better performance. Q is primarily used for translating relatively large BER differences into manageable values.
Pre-FEC signal fail and pre-FEC signal degrade thresholds are provisionable in units of dBQ, so the user does not need to worry about the FEC scheme when determining what value to set the thresholds to; the software automatically converts the dBQ values to FEC corrections per time interval based on the FEC scheme and data rate.
The Q-factor is in fact a metric that identifies degradation of the received signal and a potential LOS, and it is an estimate of the optical signal-to-noise ratio (OSNR) at the optical receiver. As attenuation of the received signal increases, the dBQ value drops, and vice versa. Hence a drop in the dBQ value can mean that there is an increase in the pre-FEC BER, and a possible LOS could occur if the problem is not corrected in time.
The quality of an optical Rx signal can be measured by determining the number of "bad" bits in a block of received data. The bad bits in each block of received data are removed and replaced with "good" zeros or ones such that the network path data can still be properly switched and passed on to its destination. This strategy is referred to as Forward Error Correction (FEC) and prevents a complete loss of traffic due to small, unimportant data loss that can be re-sent later on. The process by which the "bad" bits are replaced with the "good" bits in an Rx data block is known as mapping. The pre-FEC counts are the FEC counts of "bad" bits before the mapper, and the FEC counts (or post-FEC counts) are those after the mapper.
The number of pre-FEC counts for a given period of time can represent the status of the optical Rx network signal: an increase in the pre-FEC count means that there is an increase in the number of "bad" bits that need to be replaced by the mapper. Hence a change in the rate of the pre-FEC count (the bit error rate, BER) can identify a potential problem upstream in the network. At some point the pre-FEC count will be too high, as there will be too many "bad" bits in the incoming data block for the mapper to replace; this then means a Loss of Signal (LOS).
As the normal pre-FEC BER spans a wide range (anywhere from 1.35E-3 to 6.11E-16) and constantly fluctuates, it can be difficult for a network operator to determine whether there is a potential problem in the network. Hence a dBQ value, known as the Q-factor, is used as a measure of the quality of the received optical signal. It should be consistent with the pre-FEC bit error rate (BER).
The standards define the linear Q-factor as Q = (X1 − X0)/(N1 + N0), where Xj and Nj are the mean and standard deviation of the received mark bits (j = 1) and space bits (j = 0). This is expressed in dB as 10log(Q), or in some cases as 20log(Q).
For example, the linear Q range 3 to 8 covers the BER range of 1.35E-3 to 6.11E-16.
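That mapping follows from the Gaussian-noise relation BER = 0.5 · erfc(Q/√2); a quick check:

```python
import math

def q_to_ber(q: float) -> float:
    # Gaussian noise assumption: BER = 0.5 * erfc(Q / sqrt(2))
    return 0.5 * math.erfc(q / math.sqrt(2))

print(f"Q = 3 -> BER ≈ {q_to_ber(3):.2e}")  # ≈ 1.35e-03
print(f"Q = 8 -> BER ≈ {q_to_ber(8):.2e}")  # ≈ 6.2e-16
```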
Nortel defines dBQ as 10 × log10(Q/Qref), where Qref is the pre-FEC raw optical Q that gives a BER of 1E-15 post-FEC, assuming a particular error distribution. Some organizations define dBQ as 20 × log10(Q/Qref), so care must be taken when comparing dBQ values from different sources.
The dBQ figure represents the dBQ of margin from the pre-FEC BERs that are equivalent to a post-FEC BER of 1E-15. The equivalent linear Q values for these BERs are the Qref in the above formula.
Pre-FEC signal degrade can be used the same way a car uses its "oil light": it indicates that there is still margin left, but that you are closer to the fail point than expected, so action should be taken.
Bit error rate, BER is a key parameter that is used in assessing systems that transmit digital data from one location to another.
BER can be affected by a number of factors. By manipulating the variables that can be controlled it is possible to optimise a system to provide the performance levels that are required. This is normally undertaken in the design stages of a data transmission system so that the performance parameters can be adjusted at the initial design concept stages.
Interference: The interference levels present in a system are generally set by external factors and cannot be changed by the system design. However, it is possible to set the bandwidth of the system: by reducing the bandwidth, the level of interference can be reduced. However, reducing the bandwidth also limits the data throughput that can be achieved.
Increase transmitter power: It is also possible to increase the power level of the system so that the power per bit is increased. This has to be balanced against factors including the interference levels to other users and the impact of increasing the power output on the size of the power amplifier and overall power consumption and battery life, etc.
Lower order modulation: Lower order modulation schemes can be used, but this is at the expense of data throughput.
Reduce bandwidth: Another approach that can be adopted to reduce the bit error rate is to reduce the bandwidth. Lower levels of noise will be received and therefore the signal to noise ratio will improve. Again this results in a reduction of the data throughput attainable.
It is necessary to balance all the available factors to achieve a satisfactory bit error rate. Normally it is not possible to achieve all the requirements and some trade-offs are required. However, even with a bit error rate below what is ideally required, further trade-offs can be made in terms of the levels of error correction that are introduced into the data being transmitted. Although more redundant data has to be sent with higher levels of error correction, this can help mask the effects of any bit errors that occur, thereby improving the overall bit error rate.
A Raman amplifier is a well-known amplifier configuration. This amplifier uses conventional fiber (rather than doped fiber), which may be co- or counter-pumped to provide amplification over a wavelength range that is a function of the pump wavelength. The Raman amplifier relies upon forward or backward stimulated Raman scattering. Typically, the pump source is selected to have a wavelength around 100 nm below the wavelength range over which amplification is required.
Key notes on using Raman amplifiers:
Its usage improves the overall gain characteristics of high capacity optical wavelength division multiplexed (WDM) communications systems.
Its usage does not attenuate signals outside the wavelength range over which amplification takes place.
It is usual to provide a separate pump source for each wavelength range required, in the form of Raman fibre lasers or semiconductor pumps.
Multiple lasers increase the overall cost of Raman amplifiers.
Raman amplifiers are very sensitive to input power, so they are always used with EDFAs in cascaded fashion (a small change at the input results in a large output power change, and subsequent components may suffer).
Power consumption is very high as multiple lasers are used.
Always keep the output shut off while integrating a Raman amplifier into a link, as its high-power lasers are dangerous to personnel too.
Raman amplifiers are typically pumped using unpolarized pump beams, i.e., both the pump and the signal propagate in the fundamental mode, which supports two orthogonal polarization states. Thus, even though the fiber is single mode, the pump and signal may propagate in orthogonal polarizations.
Distributed (Raman) Gain Improved Transmission Systems
Figure 1 shows a conventional transmission system using erbium-doped fiber amplifiers (EDFAs) to amplify the signal. The signal power in the transmission line is shown; at the output of the EDFA the signal power is high. However, nonlinear effects limit the amount of amplification of the signal. The signal is attenuated along the transmission line. In addition, the minimum signal level limits the Optical Signal to Noise Ratio (OSNR) of the transmission. So the transmission distance between each amplifier point is limited by nonlinear effects at the high signal level right after amplification and the minimum allowable OSNR just before amplification.
Figure 2. Amplification scheme using distributed Raman amplification together with lumped EDFAs
By comparison, Figure 2 shows a scenario where distributed Raman amplification is used. In this hybrid version with backward propagating pumps and EDFAs, the signal power level evolves as shown by the red curves. At the end of the link, the signal is amplified by the Raman pump, and the OSNR is thereby improved. The input power level can also be lowered, as Raman amplification keeps the signal from the noise limit. The lower input power mitigates the non-linearities in the system. Forward propagating pumps or a combination of both forward and backward propagating pumps may be used. Installing transmission fibers that have been designed and optimized to take full advantage of the Raman technology allows system designs with higher capacity and lower cost.
The ITU approved DWDM band extends from 1528.77 nm to 1563.86 nm, and divides into the red band and the blue band.
The red band encompasses the longer wavelengths of 1546.12 nm and higher.
The blue band wavelengths fall below 1546.12 nm.
This division has practical value because the useful gain region of the lowest-cost EDFAs corresponds to the red band wavelengths. Thus, if a system requires only a limited number of DWDM wavelengths, using the red band wavelengths yields the lowest overall system cost.
Regarding the red and blue convention:
It is just a convention that has been prevalent for as long as the electromagnetic spectrum has been studied, whether in the Doppler effect or in Rayleigh scattering, and it was later carried over into the optics and photonics world.
(From Wikipedia: this follows the visible-light spectrum (VIBGYOR), where red shift and blue shift are discussed. A "red shift" happens when light or other electromagnetic radiation from an object is increased in wavelength, or shifted to the red end of the spectrum. In general, whether or not the radiation is within the visible spectrum, "redder" means an increase in wavelength, equivalent to a lower frequency and a lower photon energy. A "blue shift" is any decrease in wavelength, with a corresponding increase in frequency, of an electromagnetic wave; the opposite effect is referred to as red shift. In visible light, this shifts the colour from the red end of the spectrum to the blue end.)
An example to make it clearer:
C band: 1528.77 nm to 1563.86 nm
C-blue: 1529.44~1543.84 nm
=====guardband====
C-red: 1547.60~1561.53 nm
L band: 1565 nm to 1625 nm
L-blue: 1570 nm to 1584 nm
=====guardband====
L-red: 1589 nm to 1603 nm
So, this blue and red classification comes from that characterisation convention, and it is used to classify filters as well.
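For illustration, here is a tiny classifier using the example C-band ranges quoted above (these figures come from this article's example, not from an ITU-T table):

```python
def classify_c_band(wavelength_nm: float) -> str:
    # Band edges taken from the example ranges listed above.
    if 1529.44 <= wavelength_nm <= 1543.84:
        return "C-blue"
    if 1547.60 <= wavelength_nm <= 1561.53:
        return "C-red"
    if 1528.77 <= wavelength_nm <= 1563.86:
        return "C-band guardband/edge"
    return "outside C band"

print(classify_c_band(1535.82))  # C-blue
print(classify_c_band(1550.12))  # C-red
```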
As defined in G.709, an ODUk container consists of an OPUk (Optical Payload Unit) plus a specific ODUk overhead (OH). OPUk OH information is added to the OPUk information payload to create an OPUk; it includes information to support the adaptation of client signals. Within the OPUk overhead there is the payload structure identifier (PSI), which includes the payload type (PT). The payload type (PT) is used to indicate the composition of the OPUk signal.
When an ODUj signal is multiplexed into an ODUk, the ODUj signal is first extended with frame alignment overhead and then mapped into an Optical channel Data Tributary Unit (ODTU). Two different types of ODTU are defined in G.709:
– ODTUjk ((j,k) = {(0,1), (1,2), (1,3), (2,3)}; ODTU01,ODTU12,ODTU13 and ODTU23) in which an ODUj signal is mapped via the asynchronous mapping procedure (AMP), defined in clause 19.5 of G.709.
– ODTUk.ts ((k,ts) = (2,1..8), (3,1..32), (4,1..80)) in which a lower order ODU (ODU0, ODU1, ODU2, ODU2e, ODU3, ODUflex) signal is mapped via the generic mapping procedure (GMP), defined in clause 19.6 of G.709.
When PT assumes the value 20 or 21, together with the OPUk type (k = 1, 2, 3, 4), it is used to discriminate between two different ODU multiplex structures, ODTUGk:
– Value 20: supporting ODTUjk only;
– Value 21: supporting ODTUk.ts, or ODTUk.ts and ODTUjk.
The discrimination is needed for OPUk with k = 2 or 3, since OPU2 and OPU3 are able to support both ODU multiplex structures. For OPU4 and OPU1, only one type of ODTUG is supported: ODTUG4 with PT = 21 and ODTUG1 with PT = 20. The relationship between PT and TS granularity lies in the fact that the two different ODTUGk discriminated by PT and OPUk are characterized by two different tributary slot granularities of the related OPUk: the former at 2.5 Gbps, the latter at 1.25 Gbps.
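A compact sketch of that PT-to-TS-granularity rule (a hypothetical helper summarizing the statements above, not an API from any OTN stack):

```python
def ts_granularity_gbps(opuk: int, pt: int) -> float:
    """Return the tributary-slot granularity implied by OPUk type and payload type."""
    if pt == 20:
        # PT=20: ODTUjk-only multiplex structure with 2.5G tributary slots.
        if opuk not in (1, 2, 3):
            raise ValueError("PT=20 is not defined for this OPUk")
        return 2.5
    if pt == 21:
        # PT=21: ODTUk.ts (GMP) structure with 1.25G tributary slots.
        if opuk not in (2, 3, 4):
            raise ValueError("PT=21 is not defined for this OPUk")
        return 1.25
    raise ValueError("PT does not indicate a multiplex structure")

print(ts_granularity_gbps(2, 20))  # 2.5  -> ODTUG2 with PT=20
print(ts_granularity_gbps(4, 21))  # 1.25 -> ODTUG4 with PT=21
```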
To detect a failure that occurs at the source (e.g., laser failure) or the transmission facility (e.g., fiber cut), all incoming SONET signals are monitored for loss of physical-layer signal (optical or electrical). The detection of an LOS defect must take place within a reasonably short period of time for timely restoration of the transported payloads.
A SONET NE shall monitor all incoming SONET signals (before descrambling) for an "all-zeros pattern," where an all-zeros pattern corresponds to no light pulses for OC-N optical interfaces and no voltage transitions for STS-1 and STS-3 electrical interfaces. An LOS defect shall be detected when an all-zeros pattern on the incoming SONET signal lasts 100 μs or longer. If an all-zeros pattern lasts 2.3 μs or less, an LOS defect shall not be detected. The treatment of all-zeros patterns lasting between 2.3 μs and 100 μs for the purpose of LOS defect detection is not specified and is therefore left to the choice of the equipment designer. For testing conformance to the LOS detection requirement, it is sufficient to apply an all-zeros pattern lasting at most 2.3 μs, and to apply an all-zeros pattern lasting at least 100 μs.
Note that although an all-zeros pattern that lasts for less than 2.3 μs must not cause the detection of an LOS defect, an NE that receives a relatively long (in terms of the number of bit periods) all-zeros pattern of less than 2.3 μs is not necessarily expected to continue to operate error-free through that pattern. For example, in such cases it is possible that the NE's clock recovery circuitry may drift off frequency due to the lack of incoming pulses, and therefore the NE may be "looking in the wrong bit positions" for the SONET framing pattern after the all-zeros pattern ends. If this occurs, it will continue for approximately 500 μs, at which point the NE will detect an SEF defect. The NE would then perform the actions associated with SEF defect detection (e.g., initiate a search for the "new" framing pattern position), rather than the actions associated with LOS defect detection (e.g., AIS and RDI insertion, possible protection switch initiation). In addition to monitoring for all-zeros patterns, a SONET NE may also detect an LOS defect if the received signal level (e.g., the incoming optical power) drops below an implementation-determined threshold.
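A minimal sketch of those LOS thresholds (timer values from the text; the 2.3 to 100 μs band is implementation-defined, modeled here with a simple flag):

```python
def los_defect(all_zeros_duration_us: float, declare_in_gray_zone: bool = False) -> bool:
    """Apply the SONET LOS detection rule to an observed all-zeros pattern."""
    if all_zeros_duration_us >= 100:
        return True   # shall detect LOS
    if all_zeros_duration_us <= 2.3:
        return False  # shall not detect LOS
    return declare_in_gray_zone  # 2.3-100 us: equipment designer's choice

print(los_defect(150))  # True
print(los_defect(1))    # False
```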
For more than 30 years, Ethernet has evolved to meet the growing demands of packet-switched networks. It has become the unifying technology enabling communications via the Internet and other networks using Internet Protocol (IP). Due to its proven low cost, known reliability, and simplicity, the majority of today's internet traffic starts or ends on an Ethernet connection. This popularity has resulted in a complex ecosystem between carrier networks, enterprise networks, and consumers, creating a symbiotic relationship between its various parts.

In 2006, the IEEE 802.3 working group formed the Higher Speed Study Group (HSSG) and found that the Ethernet ecosystem needed something faster than 10 Gigabit Ethernet. The growth in bandwidth for network aggregation applications was found to be outpacing the capabilities of networks employing link aggregation with 10 Gigabit Ethernet. As the HSSG studied the issue, it was determined that computing and network aggregation applications were growing at different rates. For the first time in the history of Ethernet, a Higher Speed Study Group determined that two new rates were needed: 40 gigabits per second for server and computing applications and 100 gigabits per second for network aggregation applications.

The IEEE P802.3ba 40 Gb/s and 100 Gb/s Ethernet Task Force was formed in January 2008 to develop a 40 Gigabit Ethernet and 100 Gigabit Ethernet draft standard. Encompassed in this effort was the development of physical layer specifications for communication across backplanes, copper cabling, multi-mode fibre, and single-mode fibre. Continued efforts by the Task Force led to the approval of the IEEE Std 802.3ba-2010 40 Gb/s and 100 Gb/s Ethernet amendment to the IEEE Std 802.3-2008 Ethernet standard on June 17, 2010 by the IEEE Standards Board.
OBJECTIVE
The objectives that drove the development of this standard were to:
Support full‐duplex operation only
Preserve the 802.3 / Ethernet frame format utilizing the 802.3 media access controller (MAC)
Preserve minimum and maximum frame size of current 802.3 standard
Support a bit error rate (BER) better than or equal to 10^-12 at the MAC/physical layer service interface
Provide appropriate support for optical transport network (OTN)
Support a MAC data rate of 40 gigabit per second
Provide physical layer specifications which support 40 gigabit per second operation over:
at least 10km on single mode fibre (SMF)
at least 100m on OM3 multi‐mode fibre (MMF)
at least 7m over a copper cable assembly
at least 1m over a backplane
Support a MAC data rate of 100 gigabit per second
Provide physical layer specifications which support 100 gigabit per second operation over:
at least 40km on SMF
at least 10km on SMF
at least 100m on OM3 MMF
at least 7m over a copper cable assembly
ARCHITECTURE
The 40 Gb/s media system defines a physical layer (PHY) that is composed of a set of IEEE sublayers. Figure 1-1 shows the sublayers involved in the PHY. The standard defines an XLGMII logical interface, using the Roman numerals XL to indicate 40 Gb/s. This interface includes a 64-bit-wide path over which frame data bits are sent to the PCS. The FEC and Auto-Negotiation sublayers may or may not be used, depending on the media type involved.
The 100 Gb/s media system defines a physical layer (PHY) that is composed of a set of IEEE sublayers. Figure 1-2 shows the sublayers involved in the PHY. The standard defines a CGMII logical interface, using the Roman numeral C to indicate 100 Gb/s. This interface defines a 64-bit-wide path, over which frame data bits are sent to the PCS. The FEC and AN sublayers may or may not be used, depending on the media type involved.
Figure 1-2
PCS (Physical Coding Sublayer) LANES
To help meet the engineering challenges of providing 40 Gb/s data flows, the IEEE engineers provided a multilane distribution system for data through the PCS sublayer of the Ethernet interface. The PCS translates between the respective media independent interface (MII) for each rate and the PMA sublayer. The PCS is responsible for the encoding of data bits into code groups for transmission via the PMA and the subsequent decoding of these code groups from the PMA. The Task Force developed a low-overhead multilane distribution scheme for the PCS for 40 Gigabit Ethernet and 100 Gigabit Ethernet.
This scheme has been designed to support all PHY types for both 40 Gigabit Ethernet and 100 Gigabit Ethernet. It is flexible and scalable, and will support any future PHY types that may be developed, based on future advances in electrical and optical transmission. The PCS layer also performs the following functions:
Delineation of frames
Transport of control signals
Ensures necessary clock transition density needed by the physical optical and electrical technology
Stripes and re‐assembles the information across multiple lanes
The PCS leverages the 64B/66B coding scheme that was used in 10 Gigabit Ethernet. It provides a number of useful properties including low overhead and sufficient code space to support necessary code words, consistent with 10 Gigabit Ethernet.
PCS lane for 10 Gb/s Ethernet
The multilane distribution scheme developed for the PCS is fundamentally based on a striping of the 66‐bit blocks across multiple lanes. The mapping of the lanes to the physical electrical and optical channels that will be used in any implementation is complicated by the fact that the two sets of interfaces are not necessarily coupled. Technology development for either a chip interface or an optical interface is not always tied together. Therefore, it was necessary to develop an architecture that would enable the decoupling between the evolution of the optical interface widths and the evolution of the electrical interface widths.
The transmit PCS therefore performs the initial 64B/66B encoding and scrambling on the aggregate channel (40 or 100 gigabits per second) before distributing the 66-bit blocks on a round-robin basis across the multiple lanes, referred to as "PCS lanes," as illustrated in Figure 2.
The number of PCS lanes needed is the least common multiple of the expected widths of optical and electrical interfaces. For 100 Gigabit Ethernet, 20 PCS lanes have been chosen. The number of electrical or optical interface widths supportable in this architecture is equivalent to the number of factors of the total PCS lanes. Therefore, 20 PCS lanes support interface widths of 1, 2, 4, 5, 10 and 20 channels or wavelengths. For 40 Gigabit Ethernet 4 PCS lanes support interface widths of 1, 2, and 4 channels or wavelengths.
Figure 2- Virtual lane data distribution
Once the PCS lanes are created, they can be multiplexed into any of the supportable interface widths. Each PCS lane has a unique lane marker, which is inserted once every 16,384 blocks. All multiplexing is done at the bit level. The round-robin bit-level multiplexing can result in multiple PCS lanes being multiplexed into the same physical channel. The unique property of the PCS lanes is that, no matter how they are multiplexed together, all bits from the same PCS lane follow the same physical path, regardless of the width of the physical interface. This enables the receiver to correctly re-assemble the aggregate channel by first de-multiplexing the bits to re-assemble each PCS lane and then re-aligning the PCS lanes to compensate for any skew. The unique lane marker also enables the de-skew operation in the receiver. Bandwidth for these lane markers is created by periodically deleting inter-packet gaps (IPG). These alignment blocks are also shown in Figure 2.
The receiver PCS realigns multiple PCS lanes using the embedded lane markers and then re‐orders the lanes into their original order to reconstruct the aggregate signal.
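To make the distribution concrete, here is a toy model of the transmit-side striping (labeled strings stand in for 66-bit blocks, and the marker period is shortened from the standard's 16,384 blocks so the output stays visible):

```python
# Toy model of PCS-lane distribution: "66-bit blocks" (labeled strings here) are
# dealt round-robin across n_lanes, and an alignment marker is inserted on each
# lane periodically (every 16,384 blocks in the standard; every 4 blocks here).
MARKER_PERIOD = 4

def distribute(blocks: list[str], n_lanes: int) -> list[list[str]]:
    lanes = [[] for _ in range(n_lanes)]
    for i, block in enumerate(blocks):
        lanes[i % n_lanes].append(block)  # round-robin striping
    for lane_id, lane in enumerate(lanes):
        # Insert the unique lane marker; the receiver uses it to de-skew and reorder.
        for pos in range(0, len(lane), MARKER_PERIOD + 1):
            lane.insert(pos, f"<AM{lane_id}>")
    return lanes

blocks = [f"B{i}" for i in range(8)]
for lane in distribute(blocks, 4):
    print(lane)  # e.g. ['<AM0>', 'B0', 'B4'] for lane 0
```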
Two key advantages of the PCS multilane distribution methodology are that all the encoding, scrambling, and de-skew functions can be implemented in a CMOS device (which is expected to reside on the host device), and that minimal processing of the data bits (other than bit muxing) happens in the high-speed electronics embedded with an optical module. This will simplify the functionality and ultimately lower the cost of these high-speed optical interfaces.
The PMA sublayer enables the interconnection between the PCS and any type of PMD sublayer. A PMA sublayer will also reside on either side of a retimed interface, referred to as “XLAUI” (40 gigabit per second attachment unit interface) for 40 Gigabit Ethernet or “CAUI” (100 gigabit per second attachment unit interface) for 100 Gigabit Ethernet.
PCS multilane for 40 Gb/s Ethernet
PCS lanes over a faster media system
100 Gb/s multilane transmit operation
100 Gb/s multi-lane receive operation
Summary
Ethernet has become the unifying technology enabling communications via the Internet and other networks using IP. Its popularity has resulted in a complex ecosystem between carrier networks, data centers, enterprise networks, and consumers with a symbiotic relationship between the various parts.
100 GbE and 40 GbE technologies are rapidly approaching standardization and deployment. A key factor in their success will be the ability to utilize existing fiber and copper media in an environment of advancing technologies. The physical coding sublayer (PCS) of the 802.3 architecture is in a perfect position to facilitate this flexibility. The current baseline proposal for PCS implementation uses a unique virtual lane concept that provides the mechanism to handle differing electrical and optical paths.
Note: 64b/66b is a line code that transforms 64-bit data to 66-bit line code to provide enough state changes to allow reasonable clock recovery and to facilitate alignment of the data stream at the receiver.
PAUSE frames are a mechanism used in Ethernet flow control that allows an interface or switch port to send a signal requesting a short pause in frame transmission.
The PAUSE system of flow control on full-duplex link segments, originally defined in 802.3x, uses MAC control frames to carry the PAUSE commands. The MAC control opcode for a PAUSE command is 0x0001 (hex). A station that receives a MAC control frame with this opcode in the first two bytes of the data field knows that the control frame is being used to implement the PAUSE operation, for the purpose of providing flow control on a full-duplex link segment. Only stations configured for full-duplex operation may send PAUSE frames.
When a station equipped with MAC control wishes to send a PAUSE command, it sends a PAUSE frame to the 48-bit destination multicast address of 01-80-C2-00-00-01. This particular multicast address has been reserved for use in PAUSE frames. Having a well-known multicast address simplifies the flow control process by making it unnecessary for a station at one end of the link to discover and store the address of the station at the other end of the link.
Another advantage of using this multicast address arises from the use of flow control on full-duplex segments between switches. The particular multicast address used was selected from a range of addresses reserved by the IEEE 802.1D standard, which specifies basic Ethernet switch (bridge) operation. Normally, a frame with a multicast destination address that is sent to a switch will be forwarded out all other ports of the switch. However, this range of multicast addresses is special: they will not be forwarded by an 802.1D-compliant switch. Instead, frames sent to these addresses are understood by the switch to be frames meant to be acted upon within the switch.
A station sending a PAUSE frame to the special multicast address includes not only the PAUSE opcode, but also the period of pause time being requested, in the form of a two-byte integer. This number contains the length of time for which the receiving station is requested to stop transmitting data. The pause time is measured in units of pause "quanta," where each unit is equal to 512 bit times. The range of possible pause time requests is from 0 through 65,535 units.
Figure 1 shows what a PAUSE frame looks like. The PAUSE frame is carried in the data field of the MAC control frame. The MAC control opcode of 0x0001 indicates that this is a PAUSE frame. The PAUSE frame carries a single parameter, defined as the pause_time in the standard. In this example, the content of pause_time is 2, indicating a request that the device at the other end of the link stop transmitting for a period of two pause quanta (1,024 bit times total).
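A sketch that assembles the PAUSE frame bytes described above (a minimal illustration: the MAC Control EtherType 0x8808 is standard knowledge not spelled out in the text, and the FCS and minimum-size padding are omitted for brevity):

```python
PAUSE_MULTICAST = bytes.fromhex("0180C2000001")       # reserved PAUSE destination address
MAC_CONTROL_ETHERTYPE = (0x8808).to_bytes(2, "big")   # MAC Control frame type
PAUSE_OPCODE = (0x0001).to_bytes(2, "big")            # opcode for PAUSE

def pause_frame(source_mac: bytes, pause_quanta: int) -> bytes:
    # pause_quanta: 0..65535, each quantum = 512 bit times.
    return (PAUSE_MULTICAST + source_mac + MAC_CONTROL_ETHERTYPE
            + PAUSE_OPCODE + pause_quanta.to_bytes(2, "big"))

# Request 2 quanta = 1,024 bit times, as in the Figure 1 example.
frame = pause_frame(bytes.fromhex("F02E156C779B"), 2)
print(frame.hex("-"))
```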
By using MAC control frames to send PAUSE requests, a station at one end of a full-duplex link can request the station at the other end of the link to stop transmitting frames for a period of time. This provides real-time flow control between switches, or between a switch and a server, that are equipped with the optional MAC control software and connected by a full-duplex link.
The organization of the Ethernet frame is central to the operation of the system. The Ethernet standard determines both the structure of a frame and when a station is allowed to send a frame. The frame was first defined in the original Ethernet DEC-Intel-Xerox (DIX) standard, and was later redefined and modified in the IEEE 802.3 standard. The changes between the two standards were mostly cosmetic, except for the type or length field.
The DIX standard defined a type field in the frame. The first 802.3 standard (published in 1985) specified this field as a length field, with a mechanism that allowed both versions of frames to coexist on the same Ethernet system. Most networking software kept using the type field version of the frame. A later version of the IEEE 802.3 standard was changed to define this field of the frame as being either length or type, depending on usage.
Figure 1-1 shows the DIX and IEEE versions of the Ethernet frame. There are three sizes of frame currently defined in the standard, and a given Ethernet interface must support at least one of them. The standard recommends that new implementations support the most recent frame definition, called an envelope frame, which has a maximum size of 2,000 bytes. The two other sizes are basic frames, with a maximum size of 1,518 bytes, and Q-tagged frames with a maximum of 1,522 bytes.
Because the DIX and IEEE basic frames both have a maximum size of 1,518 bytes and are identical in terms of the number and length of fields, Ethernet interfaces can send either DIX or IEEE basic frames. The only difference in these frames is in the contents of the fields and the subsequent interpretation of those contents by the network interface software.
Now, we’ll take a detailed tour of the frame fields.
Preamble
The frame begins with the 64-bit preamble field, which was originally incorporated to allow 10 Mb/s Ethernet interfaces to synchronize with the incoming data stream before the fields relevant to carrying the content arrived.
The preamble was initially provided to allow for the loss of a few bits due to signal start-up delays as the signal propagates through a cabling system. Like the heat shield of a spacecraft, which protects the spacecraft from burning up during reentry, the preamble was originally developed as a shield to protect the bits in the rest of the frame when operating at 10 Mb/s.
The original 10 Mb/s cabling systems could include long stretches of coaxial cables, joined by signal repeaters. The preamble ensures that the entire path has enough time to start up, so that signals are received reliably for the rest of the frame.
The higher-speed Ethernet systems use more complex mechanisms for encoding the signals that avoid any signal start-up losses, and these systems don't need a preamble to protect the frame signals. However, it is maintained for backward compatibility with the original Ethernet frame and to provide some extra timing for interframe housekeeping, as demonstrated, for example, in the 40 Gb/s system.
While there are differences in how the two standards formally defined the preamble bits, there is no practical difference between the DIX and IEEE preambles. The pattern of bits being sent is identical:
DIX standard
In the DIX standard, the preamble consists of eight “octets,” or 8-bit bytes. The first seven comprise a sequence of alternating ones and zeros. The eighth byte of the preamble contains 6 bits of alternating ones and zeros, but ends with the special pattern of “1, 1.” These two bits signal to the receiving interface that the end of the preamble has been reached, and that the bits that follow are the actual fields of the frame.
IEEE standard
In the 802.3 specification, the preamble field is formally divided into two parts, consisting of seven bytes of preamble and one byte called the start frame delimiter (SFD). The last two bits of the SFD are 1, 1, as with the DIX standard.
Destination Address
The destination address field follows the preamble. Each Ethernet interface is assigned a unique 48-bit address, called the interface's physical or hardware address. The destination address field contains either the 48-bit Ethernet address that corresponds to the address of the interface in the station that is the destination of the frame, a 48-bit multicast address, or the broadcast address.
Ethernet interfaces read in every frame up through at least the destination address field. If the destination address does not match the interface’s own Ethernet address, or one of the multicast or broadcast addresses that the interface is programmed to receive, then the interface is free to ignore the rest of the frame. Here is how the two standards implement destination addresses:
DIX standard
The first bit of the destination address, as sent onto the network medium, is used to distinguish physical addresses from multicast addresses. If the first bit is zero, then the address is the physical address of an interface, which is also known as a unicast address, because a frame sent to this address only goes to one destination. If the first bit of the address is a one, then the frame is being sent to a multicast address. If all 48 bits are ones, this indicates the broadcast, or all-stations, address.
IEEE standard
The IEEE 802.3 version of the frame adds significance to the second bit of the destination address, which is used to distinguish between locally and globally administered addresses. A globally administered address is a physical address assigned to the interface by the manufacturer, which is indicated by setting the second bit to zero. (DIX Ethernet addresses are always globally administered.) If the address of the Ethernet interface is administered locally for some reason, then the second bit is supposed to be set to a value of one. In the case of a broadcast address, the second bit and all other bits are ones in both the DIX and IEEE standards.
Locally administered addresses are rarely used on Ethernet systems, because each Ethernet interface is assigned its own unique 48-bit address at the factory. Locally administered addresses, however, were used on some other local area network systems.
Understanding physical addresses
In Ethernet, the 48-bit physical address is written as 12 hexadecimal digits with the digits paired in groups of two, representing an octet (8 bits) of information. The octet order of transmission on the Ethernet is from the leftmost octet (as written or displayed) to the rightmost octet. The actual transmission order of bits within the octet, however, goes from the least significant bit of the octet through to the most significant bit.
This means that an Ethernet address that is written as the hexadecimal string F0-2E-15-6C-77-9B is equivalent to the following sequence of bits, sent over the Ethernet channel from left to right:
00001111 01110100 10101000 00110110 11101110 11011001
Therefore, the 48-bit destination address that begins with the hexadecimal value 0xF0 is a unicast address, because the first bit sent on the channel is a zero.
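A short Python sketch of this ordering rule (an illustration; the helper names are ours) reproduces the bit sequence above and the unicast test:

```python
# Bytes are written left to right, but each byte is transmitted
# least-significant bit first, so the unicast/multicast (I/G) bit is the
# LSB of the first address byte.

def wire_bit_sequence(mac: str) -> str:
    """Bits as they appear on the channel, for a MAC written like F0-2E-15-6C-77-9B."""
    octets = bytes(int(part, 16) for part in mac.split("-"))
    return " ".join(f"{b:08b}"[::-1] for b in octets)

def is_unicast(mac: str) -> bool:
    first_octet = int(mac.split("-")[0], 16)
    return (first_octet & 0x01) == 0   # first bit on the wire is the LSB

print(wire_bit_sequence("F0-2E-15-6C-77-9B"))
# -> 00001111 01110100 10101000 00110110 11101110 11011001
print(is_unicast("F0-2E-15-6C-77-9B"))   # True: first transmitted bit is 0
```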
Source Address
The next field in the frame is the source address. This is the physical address of the device that sent the frame. The source address is not interpreted in any way by the Ethernet MAC protocol, although it must always be the unicast address of the device sending the frame. It is provided for the use of high-level network protocols, and as an aid in troubleshooting. It is also used by switches to build a table associating source addresses with switch ports. An Ethernet station uses its physical address as the source address in any frame it transmits.
The DIX standard notes that a station can change the Ethernet source address, while the IEEE standard does not specifically state that an interface may have the ability to override the 48-bit physical address assigned by the manufacturer. However, all Ethernet interfaces in use these days appear to allow the physical address to be changed, which makes it possible for the network administrator or the high-level network software to modify the Ethernet interface address if necessary.
To provide the physical address used in the source address field, a vendor of Ethernet equipment acquires an organizationally unique identifier (OUI), which is a unique 24-bit identifier assigned by the IEEE. The OUI forms the first half of the physical address of any Ethernet interface that the vendor manufactures. As each interface is manufactured, the vendor also assigns a unique address to the interface using the second 24 bits of the 48-bit address space, and that, combined with the OUI, creates the 48-bit address. The OUI may make it possible to identify the vendor of the interface chip, which can sometimes be helpful when troubleshooting network problems.
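As a small illustration (the helper below is our own, not a standard API), splitting an address into its OUI and vendor-assigned halves is just a matter of taking the first and last three octets:

```python
# The first three octets of a MAC address carry the IEEE-assigned OUI;
# the last three are assigned by the vendor at manufacturing time.

def split_mac(mac: str) -> tuple[str, str]:
    parts = mac.split("-")
    return "-".join(parts[:3]), "-".join(parts[3:])

oui, serial = split_mac("F0-2E-15-6C-77-9B")
print(oui)     # F0-2E-15  (vendor identifier)
print(serial)  # 6C-77-9B  (vendor-assigned interface number)
```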
Q-Tag
The Q-tag is so called because it carries an 802.1Q tag, also known as a VLAN or priority tag. The 802.1Q standard defines a virtual LAN (VLAN) as one or more switch ports that function as a separate and independent Ethernet system on a switch. Ethernet traffic within a given VLAN (e.g., VLAN 100) will be sent and received only on those ports of the switch that are defined to be members of that particular VLAN (in this case, VLAN 100). A 4-byte-long Q-tag is inserted in an Ethernet frame between the source address and the length/type field to identify the VLAN to which the frame belongs. When a Q-tag is present, the minimum data field size is reduced to 42 bytes, maintaining a minimum frame size of 64 bytes.
Switches can be connected together with an Ethernet segment that functions as a trunk connection that carries Ethernet frames with VLAN tags in them. That, in turn, makes it possible for Ethernet frames belonging to VLAN 100, for example, to be carried between multiple switches and sent or received on switch ports that are assigned to VLAN 100.
VLAN tagging, a vendor innovation, was originally accomplished using a variety of proprietary approaches. Development of the IEEE 802.1Q standard for virtual bridged LANs produced the VLAN tag as a vendor-neutral mechanism for identifying which VLAN a frame belongs to.
The addition of the 4-byte VLAN tag causes the maximum size of an Ethernet frame to be extended from the original maximum of 1,518 bytes (not including the preamble) to a new maximum of 1,522 bytes. Because VLAN tags are only added to Ethernet frames by switches and other devices that have been programmed to send and receive VLAN-tagged frames, this does not affect traditional, or “classic,” Ethernet operation.
The first two bytes of the Q-tag contain an Ethernet type identifier of 0x8100. If an Ethernet station that is not programmed to send or receive a VLAN-tagged frame happens to receive a tagged frame, it will see what looks like a type identifier for an unknown protocol type and simply discard the frame.
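A hedged Python sketch of this behavior, assuming the 802.1Q layout of a 16-bit TPID (0x8100) followed by a 16-bit TCI holding a 3-bit priority (PCP), a 1-bit drop-eligible flag (DEI), and a 12-bit VLAN ID:

```python
import struct

def parse_qtag(frame: bytes):
    """Return (pcp, dei, vid) if the bytes after DA+SA carry a Q-tag, else None."""
    tpid, tci = struct.unpack("!HH", frame[12:16])  # bytes 12-15 follow the addresses
    if tpid != 0x8100:
        return None   # untagged: a station that doesn't know 0x8100 just drops it
    return (tci >> 13, (tci >> 12) & 0x1, tci & 0x0FFF)

# 16-byte example: zeroed DA and SA, then TPID 0x8100 and a TCI for VLAN 100
frame = bytes(12) + struct.pack("!HH", 0x8100, (5 << 13) | 100)
print(parse_qtag(frame))   # (5, 0, 100): priority 5, not drop-eligible, VLAN 100
```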
Envelope Prefix and Suffix
As networks grew in complexity and features, the IEEE received requests for more tags to achieve new goals. The VLAN tag provided space for a VLAN ID and Class of Service (CoS) bits, but vendors and standards groups wanted to add extra tags to support new bridging features and other schemes.
To accommodate these requests, the 802.3 standards engineers defined an “envelope frame,” which adds an extra 482 bytes to the maximum frame size. The envelope frame was specified in the 802.3as supplement to the standard, adopted in 2006. In another change, the tag data was added to the data field to produce a MAC Client Data field. Because the MAC client data field includes the tagging fields, it may seem like the frame size definition has changed, but in fact this is just a way of referring to the combination of tag data and the data field for the purpose of defining the envelope frame.
The 802.3as supplement modified the standard to state that an Ethernet implementation should support at least one of three maximum MAC client data field sizes. The data field size continues to be defined as 46 to 1,500 bytes, but to that is added the tagging information to create the MAC client data field, resulting in the following MAC client data field sizes:
1,500-byte “basic frames” (no tagging information)
1,982-byte “envelope frames” (1,500-byte data field plus 482 bytes for all tags)
1,504-byte “Q-tagged frames” (1,500-byte data field plus 4-byte tag)
The contents of the tag space are not defined in the Ethernet standard, allowing maximum flexibility for the other standards to provide tags in Ethernet frames. Either or both prefix and suffix tags can be used in a given frame, occupying a maximum tag space of 482 bytes if either or both are present. This can result in a maximum frame size of 2,000 bytes.
The latest standard simply includes the Q-tag as one of the tags that can be carried in an envelope prefix. The standard notes, “All Q-tagged frames are envelope frames, but not all envelope frames are Q-tagged frames.” In other words, you can use the envelope space for any kind of tagging, and if you use a Q-tag, then it is carried in the envelope prefix as defined in the latest standard. An envelope frame carrying a Q-tag will have a minimum data size of 42 bytes, preserving the minimum frame size of 64 bytes.
Tagged frames are typically sent between switch ports that have been configured to add and remove tags as necessary to achieve their goals. Those goals can include VLAN operations and tagging a frame as a member of a given VLAN, or more complex tagging schemes to provide information for use by higher-level switching and routing protocols. Normal stations typically send basic Ethernet frames without tags, and will drop tagged frames that they are not configured to accept.
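To recap the arithmetic behind these limits, here is a small sketch (the helper and constants are ours, derived from the 18 bytes of address, length/type, and FCS overhead described in this section):

```python
# Maximum frame sizes, preamble excluded. A frame is
# DA(6) + SA(6) + length/type(2) + data + FCS(4) = data + 18 bytes overhead.

MAX_BASIC    = 1500 + 18        # 1,518 bytes
MAX_QTAGGED  = MAX_BASIC + 4    # 1,522 bytes
MAX_ENVELOPE = MAX_BASIC + 482  # 2,000 bytes

def classify(frame_len: int) -> str:
    if frame_len <= MAX_BASIC:
        return "fits a basic frame"
    if frame_len <= MAX_QTAGGED:
        return "needs Q-tagged frame support"
    if frame_len <= MAX_ENVELOPE:
        return "needs envelope frame support"
    return "over-sized for any 802.3 frame"

print(classify(2000))   # needs envelope frame support
```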
Type or Length Field
The old DIX standard and the IEEE standard implement the type and/or length fields differently:
DIX standard
In the DIX Ethernet standard, this 16-bit field is called a type field, and it always contains an identifier that refers to the type of high-level protocol data being carried in the data field of the Ethernet frame. For example, the hexadecimal value 0x0800 has been assigned as the identifier for the Internet Protocol (IP). A DIX frame being used to carry an IP packet is sent with the value of 0x0800 in the type field of the frame. All IP packets are carried in frames with this value in the type field.
IEEE standard
When the IEEE 802.3 standard was first published in 1985, the type field was not included, and instead the IEEE specifications called this field a length field. Type fields were added to the IEEE 802.3 standard in 1997, so the use of a type field in the frame is officially recognized in 802.3. This change simply made the common practice of using the type field an official part of the standard. The identifiers used in the type field were originally assigned and maintained by Xerox, but with the type field now part of the IEEE standard, the responsibility for assigning type numbers was transferred to the IEEE.
In the IEEE 802.3 standard, this field is called a length/type field, and the hexadecimal value in the field indicates the manner in which the field is being used. The first octet of the field is considered the most significant octet in terms of numeric value.
If the value in this field is numerically less than or equal to 1,500 (decimal), then the field is being used as a length field. In that case, the value in the field indicates the number of logical link control (LLC) data octets that follow in the data field of the frame. If the number of LLC octets is less than the minimum required for the data field of the frame, then octets of padding data will automatically be added to make the data field large enough. The content of the padding data is unspecified by the standard. Upon reception of the frame, the length field is used to determine the length of valid data in the data field, and the padding data is discarded.
If the value in this field of the frame is numerically greater than or equal to 1,536 decimal (0x600 hex), then the field is being used as a type field. The range of 1,501 to 1,535 was intentionally left undefined in the standard.
In that case, the hexadecimal identifier in the field is used to indicate the type of protocol data being carried in the data field of the frame. The network software on the station is responsible for providing any padding data required to ensure that the data field is 46 bytes in length. With this method, there is no conflict or ambiguity about whether the field indicates length or type.
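The decision rule is easy to express in code; the following sketch (our own helper, not from the standard) mirrors the thresholds described above:

```python
# Length/type field interpretation per IEEE 802.3:
#   <= 1500          -> length of the LLC data that follows
#   >= 1536 (0x600)  -> protocol type identifier
#   1501..1535       -> intentionally undefined

def interpret_length_type(value: int) -> str:
    if value <= 1500:
        return f"length field: {value} LLC data octets follow"
    if value >= 0x600:
        return f"type field: protocol identifier 0x{value:04X}"
    return "undefined (1,501 to 1,535 is intentionally unused)"

print(interpret_length_type(0x0800))  # type field: protocol identifier 0x0800 (IP)
print(interpret_length_type(46))      # length field: 46 LLC data octets follow
```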
Data Field
Next comes the data field of the frame, which is also treated differently in the two standards:
DIX standard
In a DIX frame, this field must contain a minimum of 46 bytes of data, and may range up to a maximum of 1,500 bytes of data. The network protocol software is expected to provide at least 46 bytes of data.
IEEE standard
The total size of the data field in an IEEE 802.3 frame is the same as in a DIX frame: a minimum of 46 bytes and a maximum of 1,500. However, a logical link control protocol defined in the IEEE 802.2 LLC standard may ride in the data field of the 802.3 frame to provide control information. The LLC protocol is also used as a way to identify the type of protocol data being carried by the frame if the type/length field is used for length information. The LLC protocol data unit (PDU) is carried in the first set of bytes in the data field of the IEEE frame. The structure of the LLC PDU is defined in the IEEE 802.2 LLC standard.
The process of figuring out which protocol software stack gets the data in an incoming frame is known as demultiplexing. An Ethernet frame may use the type field to identify the high-level protocol data being carried by the frame. In the LLC specification, the receiving station demultiplexes the frame by deciphering the contents of the logical link control protocol data unit.
FCS Field
The last field in both the DIX and IEEE frames is the frame check sequence (FCS) field, also called the cyclic redundancy check (CRC). This 32-bit field contains a value that is used to check the integrity of the various bits in the frame fields (not including the preamble/SFD). The value is computed with the CRC polynomial over the contents of the destination, source, type (or length), and data fields. As the frame is generated by the transmitting station, the CRC value is simultaneously being calculated. The 32 bits of the CRC value that are the result of this calculation are placed in the FCS field as the frame is sent. The x^31 coefficient of the CRC polynomial is sent as the first bit of the field, and the x^0 coefficient as the last.
The CRC is calculated again by the interface in the receiving station as the frame is read in. The result of this second calculation is compared with the value sent in the FCS field by the originating station. If the two values are identical, then the receiving station is provided with a high level of assurance that no errors have occurred during transmission over the Ethernet channel. If the values are not identical, then the interface can discard the frame and increment the frame error counter.
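As a hedged illustration, the sketch below uses Python's zlib.crc32, which implements the same CRC-32 polynomial as Ethernet; the common software convention of appending the resulting value least-significant byte first is assumed here:

```python
import struct
import zlib

def append_fcs(frame: bytes) -> bytes:
    """Append the 32-bit FCS, least-significant byte first (common convention)."""
    fcs = zlib.crc32(frame) & 0xFFFFFFFF
    return frame + struct.pack("<I", fcs)

def fcs_ok(frame_with_fcs: bytes) -> bool:
    """Recompute the CRC at the receiver and compare with the transmitted FCS."""
    body, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return struct.pack("<I", zlib.crc32(body) & 0xFFFFFFFF) == fcs

sent = append_fcs(bytes(60))                      # a minimum-size frame body
corrupted = bytes([sent[0] ^ 0xFF]) + sent[1:]    # flip bits in the first byte
print(fcs_ok(sent))        # True: values match, frame accepted
print(fcs_ok(corrupted))   # False: receiver discards and counts a frame error
```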
End of Frame Detection
The presence of a signal on the Ethernet channel is known as carrier. The transmitting interface stops sending data after the last bit of a frame is transmitted, which causes the Ethernet channel to become idle. In the original 10 Mb/s system, the loss of carrier when the channel goes idle signals to the receiving interface that the frame has ended. When the interface detects loss of carrier, it knows that the frame transmission has come to an end. The higher-speed Ethernet systems use more complex signal encoding schemes, which have special symbols available for signaling to the interface the start and end of a frame.
A basic frame carrying a maximum data field of 1,500 bytes is actually 1,518 bytes in length (not including the preamble) when the 18 bytes needed for the addresses, length/ type field, and the frame check sequence are included. The addition of a further 482 bytes for envelope frames makes the maximum frame size become 2,000 bytes. This was chosen as a useful maximum frame size that could be handled by a typical Ethernet implementation in an interface or switch port, while providing enough room for current and future prefixes and suffixes.
Auto-Negotiation for fiber optic media segments turned out to be sufficiently difficult to achieve that most Ethernet fiber optic segments do not support Auto-Negotiation. During the development of the Auto-Negotiation standard, attempts were made to develop a system of Auto-Negotiation signaling that would work on the 10BASE-FL and 100BASE-FX fiber optic media systems.
However, these two media systems use different wavelengths of light and different signal timing, and it was not possible to come up with an Auto-Negotiation signaling standard that would work on both. That’s why there is no IEEE standard Auto-Negotiation support for these fiber optic link segments. The same issues apply to 10 Gigabit Ethernet segments, so there is no Auto-Negotiation system for fiber optic 10 Gigabit Ethernet media segments either.
The 1000BASE-X Gigabit Ethernet standard, on the other hand, uses identical signal encoding on the three media systems defined in 1000BASE-X. This made it possible to develop an Auto-Negotiation system for the 1000BASE-X media types, as defined in Clause 37 of the IEEE 802.3 standard.
This lack of Auto-Negotiation on most fiber optic segments is not a major problem, given that Auto-Negotiation is not as useful on fiber optic segments as it is on twisted-pair desktop connections. For one thing, fiber optic segments are most often used as network backbone links, where the longer segment lengths supported by fiber optic media are most effective. Compared to the number of desktop connections, there are far fewer backbone links in most networks. Further, an installer working on the backbone of the network can be expected to know which fiber optic media type is being connected and how it should be configured.
The MEF (Metro Ethernet Forum) has defined Carrier Ethernet as the “ubiquitous, standardized, Carrier-class service defined by five attributes that distinguish Carrier Ethernet from the familiar LAN based Ethernet.” These five attributes, in no particular order, are
1. Standardized services
•E-Line and E-LAN services provide transparent private line, virtual private line, and LAN services
•A ubiquitous service provided globally and locally via standardized equipment
•Requires no changes to customer LAN equipment or networks, and accommodates existing network connectivity such as time-sensitive TDM traffic and signaling
•Ideally suited to converged voice, video & data networks
•Wide choice and granularity of bandwidth and quality of service options
2. Scalability
•The ability for millions to use a network service that is ideal for the widest variety of business, information, communications and entertainment applications with voice, video and data
•Spans Access & Metro to National & Global Services over a wide variety of physical infrastructures implemented by a wide range of Service Providers
•Scalability of bandwidth from 1 Mbps to 10 Gbps and beyond, in granular increments
3. Reliability
•The ability for the network to detect & recover from incidents without impacting users
•Meeting the most demanding quality and availability requirements
•Rapid recovery time when problems do occur, as low as 50ms
4. Quality of Service (QoS)
•Wide choice and granularity of bandwidth and quality of service options
•Service Level Agreements (SLAs) that deliver end-to-end performance matching the requirements for voice, video and data over converged business and residential networks
•Provisioning via SLAs that provide end-to-end performance based on CIR, frame loss, delay, and delay variation characteristics (see the CIR sketch after this list)
5. Service management
•The ability to monitor, diagnose and centrally manage the network, using standards-based vendor independent implementations
•Carrier-class OAM
•Rapid service provisioning
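The CIR-based SLAs mentioned under the QoS attribute are commonly enforced with token-bucket style metering. The sketch below is a deliberately simplified, hypothetical single-rate bucket; the actual MEF bandwidth profile algorithm (MEF 10.2) adds CBS/EIR/EBS parameters and green/yellow/red frame marking:

```python
# Simplified single-rate token bucket illustrating how a committed
# information rate (CIR) can be enforced at a UNI. Parameter names and
# values here are illustrative assumptions, not MEF-defined defaults.

class TokenBucket:
    def __init__(self, cir_bps: float, cbs_bytes: int):
        self.rate = cir_bps / 8.0       # refill rate in bytes per second
        self.capacity = cbs_bytes       # committed burst size
        self.tokens = float(cbs_bytes)  # bucket starts full
        self.last = 0.0

    def conforms(self, frame_len: int, now: float) -> bool:
        """Accept the frame if it fits in the accumulated tokens, else drop/mark."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_len <= self.tokens:
            self.tokens -= frame_len
            return True
        return False

bucket = TokenBucket(cir_bps=10_000_000, cbs_bytes=12_000)  # 10 Mbps CIR
print(bucket.conforms(1518, now=0.0))   # True: within the committed burst
```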
What is Carrier Ethernet?
Carrier Ethernet essentially augments traditional Ethernet, optimized for LAN deployment, with Carrier-class capabilities which make it optimal for deployment in Service Provider Access/Metro Area Networks and beyond, to the Wide Area Network. And conversely, from an end-user (enterprise) standpoint, Carrier Ethernet is a service that not only provides a standard Ethernet (or for that matter, a standardized non-Ethernet) hand-off but also provides the robustness, deterministic performance, management, and flexibility expected of Carrier-class services.
Carrier Ethernet Architecture
Data moves from UNI to UNI across “the network” with a layered architecture.
When traffic moves between ETH domains, it does so at the TRAN layer. This allows Carrier Ethernet traffic to be agnostic to the networks that it traverses.
MEF Carrier Ethernet Terminology
•The User Network Interface (UNI)
–The UNI is always provided by the Service Provider
–The UNI in a Carrier Ethernet Network is a physical Ethernet interface at operating speeds of 10 Mbps, 100 Mbps, 1 Gbps, or 10 Gbps
•Ethernet Virtual Connection (EVC)
–Service container
–Connects two or more subscriber sites (UNIs)
–An association of two or more UNIs
–Prevents data transfer between sites that are not part of the same EVC
–Three types of EVCs
•Point-to-Point
•Multipoint-to-Multipoint
•Rooted Multipoint
–Can be bundled or multiplexed on the same UNI
–Defined in MEF 10.2 technical specification
Carrier Ethernet Terminology
•UNI Type I
–A UNI compliant with MEF 13
–Manually Configurable
•UNI Type II
–Supports E-Tree
–Supports service OAM and link protection
–Automatically Configurable via E-LMI
–Manageable via OAM
•Network to Network Interface (NNI)
–Network to Network Interface between distinct MENs operated by one or more carriers
–An active project of the MEF
•Metro Ethernet Network (MEN)
–An Ethernet transport network connecting user end-points
(Expanded to Access and Global networks in addition to the original Metro Network meaning)
Carrier Ethernet Service Types
Services Using E-Line Service Type
Ethernet Private Line (EPL)
•Replaces a TDM Private line
•Port-based service with single service (EVC) across dedicated UNIs providing site-to-site connectivity
•Typically delivered over SDH (Ethernet over SDH)
•Most popular Ethernet service due to its simplicity
Ethernet Virtual Private Line (EVPL)
•Replaces Frame Relay or ATM L2 VPN services
–To deliver higher-bandwidth, end-to-end services
•Enables multiple services (EVCs) to be delivered over a single physical connection (UNI) to customer premises
•Supports “hub & spoke” connectivity via Service Multiplexed UNI at hub site
–Similar to Frame Relay or Private Line hub and spoke deployments
Services Using E-LAN Service Type
•EP-LAN: Each UNI dedicated to the EP-LAN service. Example use is Transparent LAN
•EVP-LAN: Service Multiplexing allowed at each UNI. Example use is Internet access and corporate VPN via one UNI
Services Using E-Tree Service Type
EP-Tree and EVP-Tree: Both allow root – root and root – leaf communication but not leaf – leaf communication.
•EP-Tree requires dedication of the UNIs to the single EP-Tree service
•EVP-Tree allows each UNI to support multiple simultaneous services, at the cost of more complex configuration than EP-Tree
APPLICATION OF CARRIER ETHERNET
The Standardization of Services: Approved MEF Specifications
•MEF 2 Requirements and Framework for Ethernet Service Protection
•MEF 3 Circuit Emulation Service Definitions, Framework and Requirements in Metro Ethernet Networks
•MEF 4 Metro Ethernet Network Architecture Framework, Part 1: Generic Framework
•MEF 6 Metro Ethernet Services Definitions Phase I
•MEF 7 EMS-NMS Information Model
•MEF 8 Implementation Agreement for the Emulation of PDH Circuits over Metro Ethernet Networks
•MEF 9 Abstract Test Suite for Ethernet Services at the UNI
•MEF 10 Ethernet Services Attributes Phase I
•MEF 11 User Network Interface (UNI) Requirements and Framework
•MEF 12 Metro Ethernet Network Architecture Framework, Part 2: Ethernet Services Layer
•MEF 13 User Network Interface (UNI) Type 1 Implementation Agreement
•MEF 14 Abstract Test Suite for Traffic Management Phase 1
•MEF 15 Requirements for Management of Metro Ethernet Phase 1 Network Elements
•MEF 16 Ethernet Local Management Interface
How the MEF Specifications Enable Carrier Ethernet
When Ethernet was developed, it was recognized that the use of repeaters to connect segments to form a larger network would result in pulse regeneration delays that could adversely affect the probability of collisions. Thus, a limit was required on the number of repeaters that could be used to connect segments together. This limit in turn limited the number of segments that could be interconnected. A further limitation involved the number of populated segments that could be joined together, because stations on populated segments generate traffic that can cause collisions, whereas non-populated segments are more suitable for extending the length of a network of interconnected segments. The result of the preceding was the ‘‘5-4-3 rule.’’ That rule specifies that a maximum of five Ethernet segments can be joined through the use of a maximum of four repeaters. In actuality, this part of the rule means that no two communicating Ethernet nodes can be more than four repeaters away from one another. Finally, the ‘‘three’’ in the rule denotes the maximum number of Ethernet segments that can be populated. For example, in a maximally configured bus-based Ethernet, five segments are joined by four repeaters, and only three of those segments may be populated with stations.
An optical time-domain reflectometer (OTDR) is an optoelectronic instrument used to characterize an optical fiber and to test the integrity of fiber optic cables. It is the optical equivalent of an electronic time-domain reflectometer. The OTDR injects a series of optical pulses into the fiber under test and extracts, from the same end of the fiber, light that is scattered (Rayleigh backscatter) or reflected back from points along the fiber. The strength of the return pulses is measured and integrated as a function of time, and plotted as a function of fiber length.
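The distance axis of that plot comes from a simple conversion of round-trip time; the sketch below assumes a typical group index of about 1.468 for standard single-mode fiber (an assumption here; check the fiber data sheet for the actual value):

```python
# An event seen at round-trip time t sits at distance d = c * t / (2 * n_g),
# where n_g is the group index of the fiber. The factor of 2 accounts for
# the light travelling out to the event and back.

C = 299_792_458.0          # speed of light in vacuum, m/s
GROUP_INDEX = 1.468        # typical value for G.652 fiber (assumed)

def event_distance_m(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / (2.0 * GROUP_INDEX)

print(f"{event_distance_m(100e-6) / 1000:.1f} km")  # ~10.2 km for a 100 us echo
```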
Using an OTDR, we can:
1. Measure the distance to a fusion splice, mechanical splice, connector, or significant bend in the fiber.
2. Measure the loss across a fusion splice, mechanical splice, connector, or significant bend in the fiber.
3. Measure the intrinsic loss due to mode-field diameter variations between two pieces of single-mode optical fiber connected by a splice or connector.
4. Determine the relative amount of offset and bending loss at a splice or connector joining two single-mode fibers.
5. Determine the physical offset at a splice or connector joining two pieces of single-mode fiber, when bending loss is insignificant.
6. Measure the optical return loss of discrete components, such as mechanical splices and connectors.
7. Measure the integrated return loss of a complete fiber-optic system.
8. Measure a fiber’s linearity, monitoring for such things as local mode-field pinch-off.
9. Measure the fiber slope, or fiber attenuation (typically expressed in dB/km).
10. Measure the link loss, or end-to-end loss of the fiber network.
11. Measure the relative numerical apertures of two fibers.
12. Make rudimentary measurements of a fiber’s chromatic dispersion.
13. Measure polarization mode dispersion.
14. Estimate the impact of reflections on transmitters and receivers in a fiber-optic system.
15. Provide active monitoring on live fiber-optic systems.
16. Compare previously installed waveforms to current traces.
Chromatic dispersion affects all optical transmissions to some degree. These effects become more pronounced as the transmission rate increases and fiber length increases.
Factors contributing to increasing chromatic dispersion signal distortion include the following:
1. Laser spectral width, modulation method, and frequency chirp. Lasers with wider spectral widths and chirp have shorter dispersion limits. It is important to refer to manufacturer specifications to determine the total amount of dispersion that can be tolerated by the lightwave equipment.
2. The wavelength of the optical signal. Chromatic dispersion varies with wavelength in a fiber. In a standard non-dispersion shifted fiber (NDSF G.652), chromatic dispersion is near or at zero at 1310 nm. It increases positively with increasing wavelength and increases negatively for wavelengths less than 1310 nm.
3. The optical bit rate of the transmission laser. The higher the fiber bit rate, the greater the signal distortion effect.
4. The chromatic dispersion characteristics of the fiber used in the link. Different types of fiber have different dispersion characteristics.
5. The total fiber link length, since the effect is cumulative along the length of the fiber.
6. Any other devices in the link that can change the link’s total chromatic dispersion, including chromatic dispersion compensation modules.
7. Temperature changes of the fiber or fiber cable, which can cause small changes to chromatic dispersion. Refer to the manufacturer’s fiber cable specifications for values.
Methods to Combat Link Chromatic Dispersion
1. Change the equipment laser to a laser with a specified longer dispersion limit. This is typically a laser with a narrower spectral width or a laser that has some form of precompensation. As laser spectral width decreases, the chromatic dispersion limit increases.
2. For new construction, deploy NZ-DSF instead of SSMF fiber. NZ-DSF has a lower chromatic dispersion specification.
3. Insert chromatic dispersion compensation modules (DCMs) into the fiber link to compensate for the excessive dispersion. The optical loss of the DCM must be added to the link optical loss budget, and optical amplifiers may be required to compensate.
4. Deploy a 3R optical repeater (re-amplify, reshape, and retime the signal) once a link reaches the equipment's chromatic dispersion limit.
5. For long-haul undersea fiber deployments, splicing in alternating lengths of dispersion-compensating fiber can be considered.
6. To reduce chromatic dispersion variance due to temperature, buried cable is preferred over exposed aerial cable.
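As a rough illustration of the cumulative effect described above, accumulated dispersion can be estimated as the fiber's dispersion coefficient times the link length and compared against the transceiver tolerance from the manufacturer's data sheet; the coefficient and tolerance below are assumed typical values, not figures from this document:

```python
# Accumulated chromatic dispersion ~ D (ps/nm/km) x length (km), compared
# to the equipment's dispersion tolerance from the manufacturer data sheet.

def accumulated_dispersion_ps_nm(d_ps_nm_km: float, length_km: float) -> float:
    return d_ps_nm_km * length_km

D_G652 = 17.0        # typical ps/nm/km near 1550 nm for G.652 fiber (assumed)
TOLERANCE = 1600.0   # example tolerance for a 10G NRZ transceiver, ps/nm (assumed)

span = accumulated_dispersion_ps_nm(D_G652, 120.0)   # a 120 km link
print(span, "ps/nm")                                  # 2040.0 ps/nm
print("DCM needed" if span > TOLERANCE else "within limit")   # DCM needed
```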
The maintenance signals defined in [ITU-T G.709] provide network connection status information in the form of payload missing indication (PMI), backward error and defect indication (BEI, BDI), open connection indication (OCI), and link and tandem connection status information in the form of locked indication (LCK) and alarm indication signal (FDI, AIS).
Interaction diagrams are collected from ITU-T G.798 and an OTN application note from IpLight.