The format of the label for an SDH and/or SONET TDM-LSR link is:
https://docs.google.com/file/d/0BwE3NGerAe3tbXZ0NHhfUHpQbkk/edit?usp=sharing
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|               S               |   U   |   K   |   L   |   M   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
For SDH, this is an extension of the numbering scheme defined in
G.707 section 7.3, i.e. the (K, L, M) numbering. For SONET, the same
signaling scheme is used in order to provide easy interworking between
SDH and SONET signaling. For the S field, an STS-3 group, which
corresponds to the SDH AUG-1 level, is introduced. The U field indicates
the position of the STS-3c-SPE or STS-1-SPE within the STS-3 group.

1. S is the index of a particular AUG-1/STS-3 group. S=1->N
indicates a specific AUG-1/STS-3 group inside an STM-N/STS-3xN
multiplex. For example, S=1 indicates the first AUG-1/STS-3 group,
and S=N indicates the last AUG-1/STS-3 group of this multiplex.
S is not significant for STM-0/STS-1.

2. U indicates a specific VC/STS-SPE inside a given AUG-1/STS-3
group or STM-0/STS-1. U=1 indicates a single VC-4/STS-3c-SPE in an
AUG-1/STS-3 group or the single VC-3/STS-1-SPE in an STM-0/STS-1, while U=2->4
indicates a specific VC-3/STS-1-SPE inside the given AUG-1/STS-3
group.

3. K is only significant for a VC-4/STS-3c and must be ignored for
a higher order VC-3/STS-1-SPE. For SDH it indicates a specific branch of a VC-4.
K=1 indicates that the VC-4 is not further subdivided and
contains a C-4. K=2->4 indicates a specific TUG-3 inside the VC-4.
For a SONET STS-3c-SPE it is fixed to K=1, as SONET does not support
a substructured STS-3c-SPE.

4. L indicates a specific branch of a TUG-3, VC-3 or STS-1 SPE.
It is not significant for an unstructured VC-4/STS-3c-SPE. L=1
indicates that the TUG-3/VC-3/STS-1 SPE is not further
subdivided and contains a VC-3/C-3 in SDH or the equivalent in
SONET. L=2->8 indicates a specific TUG-2/VT Group inside the
corresponding higher order signal.

5. M indicates a specific branch of a TUG-2/VT Group. It is not
significant for an unstructured VC-4, STS-3c-SPE, TUG-3, VC-3 or STS-1 SPE.
M=1 indicates that the TUG-2/VT Group is not further subdivided
and contains a VC-2/VT-6 SPE. M=2->3 indicates a specific VT-3
inside the corresponding VT Group; these values MUST NOT be used
for SDH since there is no equivalent of VT-3 in SDH. M=4->6
indicates a specific VC-12/VT-2 SPE inside the corresponding
TUG-2/VT Group. M=7->10 indicates a specific VC-11/VT-1.5 SPE
inside the corresponding TUG-2/VT Group. Note that M=0 denotes
an unstructured VC-4, VC-3 or STS-1 SPE (easy for debugging).

The M encoding is summarized in the following table:

M    SDH                     SONET
---  ----------------------  -----------------------
0    unstructured VC-4/VC-3  unstructured STS-1 SPE
1    VC-2                    VT-6
2    -                       1st VT-3
3    -                       2nd VT-3
4    1st VC-12               1st VT-2
5    2nd VC-12               2nd VT-2
6    3rd VC-12               3rd VT-2
7    1st VC-11               1st VT-1.5
8    2nd VC-11               2nd VT-1.5
9    3rd VC-11               3rd VT-1.5
10   4th VC-11               4th VT-1.5
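
For readers who like to see the layout programmatically, here is a minimal Python sketch (the helper names pack_suklm/unpack_suklm are illustrative, not from the standard) that packs and unpacks the 32-bit label with a 16-bit S field and four 4-bit fields as drawn above:

def pack_suklm(s, u, k, l, m):
    # Pack the five fields into one 32-bit label: S in the 16 MSBs,
    # then 4 bits each for U, K, L and M (illustrative helper).
    assert 0 <= s < 2**16 and all(0 <= v < 2**4 for v in (u, k, l, m))
    return (s << 16) | (u << 12) | (k << 8) | (l << 4) | m

def unpack_suklm(label):
    # Split a 32-bit label back into (S, U, K, L, M).
    return ((label >> 16) & 0xFFFF,
            (label >> 12) & 0xF,
            (label >> 8) & 0xF,
            (label >> 4) & 0xF,
            label & 0xF)

# Example 1 below (S=1, U=1, K=1, L=0, M=0): the unstructured VC-4/STS-3c-SPE
# of the 1st AUG-1/STS-3 group.
label = pack_suklm(1, 1, 1, 0, 0)
print(hex(label), unpack_suklm(label))   # 0x11100 (1, 1, 1, 0, 0)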

In case of contiguous concatenation, the label that is used is the
lowest label of the contiguously concatenated signal as explained
before. The higher part of the label indicates where the signal
starts and the lowest part is not significant. For instance, when
requesting a VC-4-16c the label is S>0, U=0, K=0, L=0, M=0.

Examples of labels:

Example 1: S>0, U=1, K=1, L=0, M=0
Denotes the unstructured VC-4/STS-3c-SPE of the Sth AUG-1/STS-3 group.

Example 2: S>0, U=1, K>1, L=1, M=0
Denotes the unstructured VC-3 of the Kth-1 TUG-3 of the Sth AUG-1.

Example 3: S>0, U>1, K=0, L=1, M=0
Denotes the Uth unstructured VC-3/STS-1 SPE of the Sth AUG-1/STS-3 group.

Example 4: S>0, U>1, K=0, L>1, M=1
Denotes the VC-2/VT-6 in the Lth-1 TUG2/VT Group in the Uth VC-3/STS-1 SPE of the Sth AUG-1/STS-3 group.

Example 5: S>0, U>1, K=0, L>1, M=9
Denotes the 3rd VC-11/VT-1.5 in the Lth-1 TUG2/VT Group in the Uth VC-3/STS-1 SPE of the Sth AUG-1/STS-3 group.

Example 6: S>0, U=1, K>1, L>1, M=5
Denotes the 2nd VC-12 in the Lth-1 TUG2 in the Kth TUG3 in the VC-4 of the Sth AUG-1.

This topic is legacy now, but to keep readers aware of the basic concepts, the SONET/SDH basics are compiled together here.

  • Bellcore defines GR-253 (SONET); ITU-T defines SDH primarily in G.707 (optical interfaces in G.691)
  • SONET means Synchronous Optical NETwork
  • First North American Fiber-Optic Telecommunications Standard to overcome the limitations of the traditional asynchronous network.
  • Formulated by the Exchange Carriers Standards Association for the American National Standards Institute
  • Incorporated into the Synchronous Digital Hierarchy (SDH) recommendations of the International Telecommunications Union (ITU).
  • SDH is mostly used outside North America and principally in Europe
  • Synchronous systems (e.g. SONET):
  • The average frequency of all clocks in the system is the same or nearly the same
  • Clocking is provided by a highly stable reference supply

    Allows many STS-1s to be stacked together without bit-stuffing

    STS-1s as well as VTs are easily accessed from higher-rate signals

    Pointers accommodate differences in reference source frequencies and phase wander

    • STS-1: Synchronous Transport Signal – level 1
    • STM-1: Synchronous Transport Module – level 1
  • The telecommunications industry adopted the Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH) standard for optical transport of TDM data. SONET, used in North America, and SDH, used elsewhere, are two closely related standards that specify interface parameters, rates, framing formats, multiplexing methods and management for synchronous TDM  over fiber.
  • SONET/SDH takes “n” bit streams, multiplexes them and optically modulates the signal, sending it out using a light emitting device over fiber with a bit rate equal to (incoming bit rate) multiplied by “n.” Traffic arriving at the SONET/SDH multiplexer from four places at 2.5 Gigabits per second will go out as a single stream at 4 times 2.5 Gigabits per second, or 10 Gigabits per second.  
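
As a quick check of the "multiplied by n" arithmetic, the short Python sketch below (purely illustrative) reproduces the line, payload and overhead rates of the hierarchy table that follows from the STS-1 building-block rates:

# Illustrative arithmetic only: OC-N / STS-N rates are N times the STS-1 rates.
STS1_LINE, STS1_PAYLOAD, STS1_OVERHEAD = 51.840, 50.112, 1.728   # Mbps

def oc_rates(n):
    # Return (line, payload, overhead) rates in Mbps for OC-N / STS-N.
    return n * STS1_LINE, n * STS1_PAYLOAD, n * STS1_OVERHEAD

for n in (1, 3, 12, 48, 192):
    line, payload, overhead = oc_rates(n)
    print(f"OC-{n:<3}  line={line:9.3f}  payload={payload:9.3f}  overhead={overhead:8.3f} Mbps")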

 SONET/SDH Digital Hierarchy

Optical Level   SONET Electrical Level   SDH Equivalent   Line Rate (Mbps)   Payload Rate (Mbps)   Overhead Rate (Mbps)   SONET Capacity             SDH Capacity
OC-1            STS-1                    -                51.840             50.112                1.728                  28 DS-1s or 1 DS-3         21 E1s
OC-3            STS-3                    STM-1            155.520            150.336               5.184                  84 DS-1s or 3 DS-3s        63 E1s or 1 E4
OC-12           STS-12                   STM-4            622.080            601.344               20.736                 336 DS-1s or 12 DS-3s      252 E1s or 4 E4s
OC-48           STS-48                   STM-16           2488.320           2405.376              82.944                 1,344 DS-1s or 48 DS-3s    1,008 E1s or 16 E4s
OC-192          STS-192                  STM-64           9953.280           9621.504              331.776                5,376 DS-1s or 192 DS-3s   4,032 E1s or 64 E4s

Although an SDH STM-1 has the same bit rate as the SONET STS-3, the two signals contain different frame structures.

STM = Synchronous Transport Module (ITU-T); STS = Synchronous Transport Signal (ANSI); OC = Optical Carrier (ANSI)

 

 SONET/SDH Tributaries

Tributary Signal   Tributary Bit Rate   SONET Name   SDH Name
DS-1               1.728 Mbps           VT-1.5       TU-11
E-1                2.304 Mbps           VT-2         TU-12
DS-1C              3.456 Mbps           VT-3         -
DS-2               6.912 Mbps           VT-6         TU-2
E-3                49.152 Mbps          -            TU-3

Optical Transport Network (OTN)

ITU-T Recommendations on the OTN Transport Plane
The following table lists all of the known ITU-T Recommendations specifically related to the OTN Transport Plane.

Topic Title Publ.
Definitions G.870 Definitions and Terminology for Optical Transport Networks (OTN) 2004
Framework for Recommendations G.871/Y.1301 Framework for Optical Transport Network Recommendations 10/00
Architectural Aspects G.872 Architecture of Optical Transport Networks 11/01
G.872 Amend. 1 Architecture of Optical Transport Networks 12/03
G.872 Living List
Control Plane ASTN/ASON recommendations are moved to specific ASTN/ASON standards page
Structures Mapping G.709/Y.1331 Network node interface for the optical transport network (OTN) 03/03
G.709/Y.1331 Addendum 1 12/03
G.709 Living List
G.975 Forward Error Correction 10/00
Functional Characteristics G.681 Functional characteristics of interoffice long-haul line systems using optical amplifiers, including optical multiplexing 10/96
G.798 Characteristics of optical transport network (OTN) equipment functional blocks 01/02
G.798 Amendment 1 06/02
G.798 Living List
G.806 Characteristics of transport equipment – Description Methodology and Generic Functionality 10/00
G.7710/Y.1701 Common Equipment Management Requirements 11/01
Protection Switching
G.808.1 (G.gps) Generic protection switching – Linear trail and subnetwork protection 12/03
G.873.1 Optical Transport network (OTN) – Linear Protection 03/03
G.873.1 Errata 1 Optical Transport network (OTN) – Linear Protection 10/03
Management Aspects G.874 Management aspects of the optical transport network element 11/01
G.874.1 Optical Transport Network (OTN) Protocol-Neutral Management Information Model For The Network Element View 01/02
G.875 Optical Transport Network (OTN) management information model for the network element view
Data Communication Network (DCN) G.7712/Y.1703 Architecture and specification of data communication network 03/03
G.dcn living list
Error Performance G.8201 (G.optperf) Error performance parameters and objectives for multi-operator international paths within the Optical Transport Network (OTN) 09/03
G.optperf living list
M.2401 (M.24otn) Error Performance Limits and Procedures for Bringing-Into-Service and Maintenance of multi-operator international paths and sections within Optical Transport Networks 12/03
Jitter & Wander Performance G.8251(G.otnjit) The control of jitter and wander within the optical transport network (OTN) 11/01
G.8251 Amendment 1 The control of jitter and wander within the optical transport network (OTN) 06/02
G.8251 Corrigendum 1 The control of jitter and wander within the optical transport network (OTN) 06/02
Physical-Layer Aspects G.664 General Automatic Power Shut-Down Procedures for Optical Transport Systems 06/99
G.691 Optical Interfaces for single-channel SDH systems with Optical Amplifiers, and STM-64 and STM-256 systems 10/00
G.692 Optical Interfaces for Multichannel Systems with Optical Amplifiers 10/98
G.693 Optical interfaces for intra-office systems 11/01
G.694.1 Spectral grids for WDM applications: DWDM frequency grid 06/02
G.694.2 Spectral grids for WDM applications: CWDM wavelength grid 06/02
G.695 Optical interfaces for Coarse Wavelength Division Multiplexing applications 2003
G.696.1(G.IaDI) Intra-Domain DWDM applications 2004
G.697(G.optmon) Optical monitoring for DWDM system 2004
G.959.1 Optical Transport Networking Physical Layer Interfaces 02/01
Sup.39 (Sup.dsn) Optical System Design and Engineering Considerations 2003
Fibres G.651 Characteristics of a 50/125 um multimode graded index optical fibre cable 02/98
G.652 Characteristics of a single-mode optical fibre cable 03/03
G.653 Characteristics of a dispersion-shifted single mode optical fibre cable 12/03
G.654 Characteristics of a cut-off shifted single-mode fibre cable 06/02
G.655 Characteristics of a non-zero dispersion shifted single-mode optical fibre cable 03/03
Components & Sub-systems G.661 Definition and test methods for the relevant generic parameters of optical amplifier devices and subsystems 10/98
G.662 Generic characteristics of optical fibre amplifier devices and subsystems 10/98
G.663 Application related aspects of optical fibre amplifier devices and sub-systems 04/00
G.671 Transmission characteristics of passive optical components 06/02

 

COMPARISON OF SDH AND OTN

1       Abbreviations

2       What is OTN/OTH

2.1        References

3       Optical transport network interface structure

3.1        Basic signal structure

3.1.1         OCh substructure

3.1.2         Full functionality OTM-n.m (n ≥ 1) structure

3.1.3         Reduced functionality OTM-nr.m and OTM-0.m structure

3.2        Information structure for the OTN interfaces

4       Multiplexing/mapping principles and bit rates

4.1        Mapping

4.2        Wavelength division multiplex

4.3        Bit rates and capacity

4.4        ODUk Time-Division Multiplex

5       OTUk, ODUk, OPUk Frame Structure

5.1        OPUk Overhead and Processing

5.1.1         Payload Structure Identifier (PSI)

5.1.2         Payload Type (PT)

5.2        ODUk Overhead and Processing

5.2.1         Path Monitoring (PM)

5.2.2         Tandem Connection Monitoring (TCM)

5.2.3         General Communication Channels (GCC1, GCC2)

5.2.4         Automatic Protection Switching and Protection Communication Channel (APS/PCC)

5.2.5         Fault Type and Fault Location reporting communication channel (FTFL)

5.3        OTUk Overhead and Processing

5.3.1         Scrambling

5.3.2         Frame Alignment Overhead

5.3.3         Section Monitoring (SM)

5.3.4         General Communication Channel 0 (GCC0)

6       OTN Maintenance Signals

6.1        OTUk maintenance signals

6.1.1         OTUk alarm indication signal (OTUk-AIS)

6.2        ODUk maintenance signals

6.2.1         ODUk Open Connection Indication (ODUk-OCI)

6.2.2         ODUk Locked (ODUk-LCK)

6.2.3         ODUk Alarm Indication Signal (ODUk-AIS)

6.3        Client maintenance signal

6.3.1         Generic AIS for constant bit rate signals

6.3.2         Client source ODUk-AIS

6.3.3         Client source ODUk-OCI

7       Defect detection

7.1        dPLM (Payload Mismatch)

7.2        dMSIM (Multiplex Structure Identifier Mismatch supervision)

7.3        dLOFLOM (Loss of Frame and Multi-frame)

8       Synchronization

8.1        Introduction

8.2        Network requirements

9       Why use OTN

9.1        Forward Error Correction (FEC)

9.2        Tandem Connection Monitoring

9.3        Transparent Transport of Client Signals

9.4        Switching Scalability

 

1       Abbreviations

This document uses the following abbreviations:

0xYY                 YY is a value in hexadecimal representation

3R                    Re-amplification, Reshaping and Retiming

ACT                  Activation (in the TCM ACT byte)

AI                     Adapted Information

AIS                    Alarm Indication Signal

APS                   Automatic Protection Switching

BDI                   Backward Defect Indication

BEI                    Backward Error Indication

BIAE                  Backward Incoming Alignment Error

BIP                    Bit Interleaved Parity

CBR                  Constant Bit Rate

CI                     Characteristic Information

CM                    Connection Monitoring

CRC                  Cyclic Redundancy Check

DAPI                 Destination Access Point Identifier

EXP                   Experimental

ExTI                  Expected Trace Identifier

FAS                   Frame Alignment Signal

FDI                   Forward Defect Indication

FEC                   Forward Error Correction

GCC                  General Communication Channel

IaDI                  Intra-Domain Interface

IAE                   Incoming Alignment Error

IrDI                   Inter-Domain Interface

JOH                  Justification Overhead

LSB                   Least Significant Bit

MFAS                 Multi-Frame Alignment Signal

MFI                   Multi-frame Indicator

MS                    Maintenance Signal

MSB                   Most Significant Bit

MSI                   Multiplex Structure Identifier

NNI                   Network Node Interface

OCh                  Optical channel with full functionality

OCI                   Open Connection Indication

ODU                  Optical Channel Data Unit

ODUk                Optical Channel Data Unit-k

ODTUjk             Optical channel Data Tributary Unit j into k

ODTUG              Optical channel Data Tributary Unit Group

ODUk-Xv            X virtually concatenated ODUk’s

OH                    Overhead

OMS                  Optical Multiplex Section

OMS-OH                        Optical Multiplex Section Overhead

OMU                  Optical Multiplex Unit

ONNI                 Optical Network Node Interface

OOS                  OTM Overhead Signal

OPS                   Optical Physical Section

OPU                  Optical Channel Payload Unit

OPUk                Optical Channel Payload Unit-k

OPUk-Xv            X virtually concatenated OPUk’s

OSC                  Optical Supervisory Channel

OTH                  Optical Transport Hierarchy

OTM                  Optical Transport Module

OTN                  Optical Transport Network

OTS                  Optical Transmission Section

OTS-OH               Optical Transmission Section Overhead

OTU                  Optical Channel Transport Unit

OTUk                Optical Channel Transport Unit-k

PCC                  Protection Communication Channel

PM                    Path Monitoring

PMI                   Payload Missing Indication

PMOH                Path Monitoring OverHead

ppm                  parts per million

PRBS                 Pseudo Random Binary Sequence

PSI                    Payload Structure Identifier

PT                    Payload Type

RES                   Reserved for future international standardization

RS                     Reed-Solomon

SAPI                  Source Access Point Identifier

Sk                     Sink

SM                    Section Monitoring

SMOH                Section Monitoring OverHead

So                     Source

TC                    Tandem Connection

TCM                  Tandem Connection Monitoring

TS                     Tributary Slot

TxTI                 Transmitted Trace Identifier

UNI                   User-to-Network Interface

VCG                  Virtual Concatenation Group

VCOH                Virtual Concatenation Overhead

vcPT                 virtual concatenated Payload Type

 

2       What is OTN/OTH

The Optical Transport Hierarchy (OTH) is a new transport technology for the OTN developed by the ITU. It is based on the network architecture defined in ITU G.872 “Architecture for the Optical Transport Network (OTN)”.

G.872 defines an architecture that is composed of the Optical Channel (OCh), Optical Multiplex Section (OMS) and Optical Transmission Section (OTS). It then describes the functionality that is needed to make OTN work. However, it may be interesting to note the decision made during G.872 development:

“During the development of ITU-T Rec. G.709, (implementation of the Optical Channel Layer according to ITU-T Rec. G.872 requirements), it was realized that the only techniques presently available that could meet the requirements for associated OCh trace, as well as providing an accurate assessment of the quality of a digital client signal, were digital techniques….”

“For this reason ITU-T Rec. G.709 chose to implement the Optical Channel by means of a digital framed signal with digital overhead that supports the management requirements for the OCh. Furthermore this allows the use of Forward Error Correction for enhanced system performance. This results in the introduction of two digital layer networks, the ODU and OTU. The intention is that all client signals would be mapped into the Optical Channel via the ODU and OTU layer networks.”

Currently there are no physical implementations of the OCh, OMS and OTS layers. As they are defined and implemented, they will be included in this document.

2.1      References

ITU-T Rec. G.709 (2009)            “Interfaces for the Optical Transport Network (OTN)”

ITU-T Rec. G.798 (2010)            “Characteristics of optical transport network hierarchy equipment functional blocks”

ITU-T Rec. G.872 (2001)            “Architecture of optical transport networks”

ITU-T Rec. G.873.1 (2006)         “Optical Transport Network: Linear protection”

ITU-T Rec. G.874 (2010)            “Management aspects of the optical transport network element”

ITU-T Rec. G.874.1 (2002)         “Optical transport network: Protocol-neutral management information model for the network element view”

ITU-T Rec. G.959.1 (2009)         “Optical transport network physical layer interfaces”

ITU-T Rec. G.8251 (2011)          “The control of jitter and wander within the optical transport network (OTN)”

 

3       Optical transport network interface structure

3.1      Basic signal structure

Figure 1 Structure of the OTN interfaces

3.1.1     OCh substructure

The optical channel layer is further structured in layer networks in order to support the network management and supervision functionalities:

  • The optical channel with full (OCh) or reduced functionality (OChr), which provides transparent network connections between 3R regeneration points in the OTN.
  • The optical channel transport unit (OTUk/OTUkV) which provides supervision and conditions the signal for transport between 3R regeneration points in the OTN.

 

  • The optical channel data unit (ODUk) which provides:
  • tandem connection monitoring (ODUkT)
  • end-to-end path supervision (ODUkP)
  • adaptation of client signals via the optical channel payload unit (OPUk)
  • adaptation of OTN ODUk signals via the optical channel payload unit (OPUk)

3.1.2     Full functionality OTM-n.m (n ≥ 1) structure

The OTM-n.m (n ≥ 1) consists of the following layers:

  • optical transmission section (OTSn)
  • optical multiplex section (OMSn)
  • optical channel (OCh)
  • optical channel transport unit (OTUk/OTUkV)
  • one or more optical channel data unit (ODUk)

3.1.3     Reduced functionality OTM-nr.m and OTM-0.m structure

The OTM-nr.m and OTM-0.m consist of the following layers:

  • optical physical section (OPSn)
  • reduced functionality optical channel (OChr)
  • optical channel transport unit (OTUk/OTUkV)
  • one or more optical channel data unit (ODUk)

3.2      Information structure for the OTN interfaces

Figure 2 Principal information containment relationships

The following layers are defined in OTN:

  • OPUk: Optical channel payload unit k (k = 0, 1, 2, 3, 4)
  • ODUk: Optical channel data unit k (k = 0, 1, 2, 3, 4)
  • OTUk: Optical channel transport unit k (k = 1, 2, 3, 4)
  • OCh: Optical channel, a single wavelength
  • OMSn: Optical multiplex section of order n (Capacities for n = 0 and n = 16 are defined)
  • OTSn: Optical transmission section of order n (Capacities for n = 0 and n = 16 are defined)

 

  • OTM-n.m: Optical transport module of rate m with n optical channels. Possible values for m are:

1: 2.5 Gb/s

2: 10 Gb/s

3: 40 Gb/s

4: 100 Gb/s

Figure 3 shows how they are being used in a network.

Figure 3 OTN Network Layers

However, for all intents and purposes, there are only four layers:

Figure 4 OTN Hierarchy

The OPUk, ODUk, and OTUk are in the electrical domain. The OCh is in the optical domain. There are more layers in the optical domain than just the OCh, but they are not being used now.

4       Multiplexing/mapping principles and bit rates

Figure 5 shows the relationship between various information structure elements and illustrates the multiplexing structure and mappings (including wavelength and time division multiplexing) for the OTM-n.

Figure 5 OTM multiplexing and mapping structure

The OTS, OMS, OCh and COMMS overhead is inserted into the OOS.

4.1      Mapping

The OPUk encapsulates the Client signal (e.g. SDH) and does any rate justification that is needed. It is analogous to the path layer in SDH in that it is mapped at the source, de-mapped at the sink, and not modified by the network.

The OPUk is mapped into an ODUk. The ODUk performs similar functions as the path overhead in SDH.

The ODUk is mapped into an OTUk[V]. The OTUk[V] contains the FEC and performs similar functions as the section overhead in SDH.

After the FEC is added, the signal is then sent to a serializer/deserializer to be converted to the optical domain. The OTUk[V] is mapped into an OCh[r] and the OCh[r] is then modulated onto an OCC[r].

4.2      Wavelength division multiplex

Up to n (n ≥ 1) OCC[r] are multiplexed into an OCG-n[r].m using wavelength division multiplexing. The OCC[r] tributary slots of the OCG-n[r].m can be of different size.

The OCG-n[r].m is transported via the OTM-n[r].m. For the case of the full functionality OTM-n.m interfaces the OSC is multiplexed into the OTM-n.m using wavelength division multiplexing.

4.3      Bit rates and capacity

The data rates were constructed so that they could transfer SDH and Ethernet signals efficiently. The bit rates are shown in the following tables:

Table 1 OTU types and capacity

Table 2 ODU types and capacity

Table 3 OPU types and capacity
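
Since Tables 1-3 appear here only as figure placeholders, the nominal rates can be recalculated from the SDH base rates using the usual G.709 multiplicative factors; the following Python sketch is a non-normative illustration for k = 1, 2, 3 (exact rounded values should be taken from G.709 itself):

# Non-normative illustration: nominal OPU/ODU/OTU rates for k = 1, 2, 3 derived
# from the STM-16/64/256 rates with the usual G.709 factors
#   OPUk payload = 238/(239-k), ODUk = 239/(239-k), OTUk = 255/(239-k)
# times the corresponding SDH base rate.
BASE_KBIT_S = {1: 2_488_320, 2: 9_953_280, 3: 39_813_120}   # STM-16 / STM-64 / STM-256

for k, base in BASE_KBIT_S.items():
    d = 239 - k
    opu, odu, otu = base * 238 / d, base * 239 / d, base * 255 / d
    print(f"k={k}: OPU{k} ~{opu/1e6:.6f}  ODU{k} ~{odu/1e6:.6f}  OTU{k} ~{otu/1e6:.6f} Gbit/s")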

 

4.4      ODUk Time-Division Multiplex

Figure 6 and Figure 7 show the relationship between various information structure elements and illustrate the multiplexing structure and mappings (including wavelength and time division multiplexing) for the OTM-n. In the multi-domain OTN any combination of the ODUk multiplexing layers may be present at a given OTN interface.

Figure 6 shows that a (non-OTN) client signal is mapped into a lower order OPU, identified as “OPU (L)”. The OPU (L) signal is mapped into the associated lower order ODU, identified as “ODU (L)”. The ODU (L) signal is either mapped into the associated OTU[V] signal, or into an ODTU. The ODTU signal is multiplexed into an ODTU Group (ODTUG). The ODTUG signal is mapped into a higher order OPU, identified as “OPU (H)”. The OPU (H) signal is mapped into the associated higher order ODU, identified as “ODU (H)”. The ODU (H) signal is mapped into the associated OTU[V].

The OPU (L) and OPU (H) are the same information structures, but with different client signals. The concepts of lower order and higher order ODU are specific to the role that the ODU plays within a single domain.

Figure 6 OTM multiplexing and mapping structure

 

Figure 7 shows that an OTU[V] signal is mapped either into an optical channel signal, identified as OCh and OChr, or into an OTLk.n. The OCh/OChr signal is mapped into an optical channel carrier, identified as OCC and OCCr. The OCC/OCCr signal is multiplexed into an OCC group, identified as OCG-n.m and OCG-nr.m. The OCG-n.m signal is mapped into an OMSn. The OMSn signal is mapped into an OTSn. The OTSn signal is presented at the OTM-n.m interface.

The OCGnr.m signal is mapped into an OPSn. The OPSn signal is presented at the OTM-nr.m interface.

A single OCCr signal is mapped into an OPS0. The OPS0 signal is presented at the OTM-0.m interface. The OTLk.n signal is mapped into an optical transport lane carrier, identified as OTLC. The OTLC signal is multiplexed into an OTLC group, identified as OTLCG. The OTLCG signal is mapped into an OPSMnk. The OPSMnk signal is presented at the OTM-0.mvn interface.

Figure 7 OTM multiplexing and mapping structure

 

5       OTUk, ODUk, OPUk Frame Structure

Figure 8 shows the overall frame format for an OTUk signal. The various fields will be explained in the following sub-sections.

Figure 8 OTN frame format

 

5.1      OPUk Overhead and Processing

The OPUk (k = 1,2,3) frame structure is organized in an octet-based block frame structure with 4 rows and 3810 columns.

Figure 9 OPUk frame structure

The two main areas of the OPUk frame are:

  • OPUk overhead area
  • OPUk payload area

Columns 15 to 16 of the OPUk are dedicated to OPUk overhead area.

Columns 17 to 3824 of the OPUk are dedicated to OPUk payload area.

NOTE – OPUk column numbers are derived from the OPUk columns in the ODUk frame

OPUk OH information is added to the OPUk information payload to create an OPUk. It includes information to support the adaptation of client signals. The OPUk OH is terminated where the OPUk is assembled and disassembled.

 

 

Figure 10 OPUk frame

5.1.1     Payload Structure Identifier (PSI)

The 256-byte PSI signal is aligned with the ODUk multi-frame (i.e. PSI[0] is present at ODUk multi-frame position 0000 0000, PSI[1] at position 0000 0001, PSI[2] at position 0000 0010, etc.).

PSI[0] contains a one-byte payload type. PSI[1] to PSI[255] are mapping and concatenation specific.

5.1.2     Payload Type (PT)

A one-byte payload type signal is defined in the PSI[0] byte of the payload structure identifier to indicate the composition of the OPUk signal.
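
A hedged sketch of how a receiver might assemble the 256-byte PSI across one multi-frame and read the payload type from PSI[0]; psi_byte_of_frame is a placeholder for however the single per-frame PSI byte is obtained:

# Illustrative only: accumulate the 256-byte PSI over one multi-frame and read
# the Payload Type from PSI[0].
def collect_psi(psi_byte_of_frame):
    psi = bytearray(256)
    for mfas in range(256):                  # MFAS counts 0..255 over the multi-frame
        psi[mfas] = psi_byte_of_frame(mfas)  # PSI[i] is carried in the frame with MFAS == i
    return psi

def payload_type(psi):
    return psi[0]                            # PSI[0] carries the one-byte Payload Type (PT)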

 

5.2      ODUk Overhead and Processing

The ODUk (k = 1,2,3) frame structure is organized in an octet-based block frame structure with 4 rows and 3824 columns.

Figure 11 ODUk frame structure

The three main areas of the ODUk frame are:

  • OTUk area
  • ODUk overhead area;
  • OPUk area.

Columns 1 to 14 of rows 2-4 are dedicated to ODUk overhead area.

Columns 1 to 14 of row 1 are reserved for frame alignment and OTUk specific overhead.

Columns 15 to 3824 of the ODUk are dedicated to OPUk area.

ODUk OH information is added to the ODUk information payload to create an ODUk. It includes information for maintenance and operational functions to support optical channels. The ODUk OH consists of portions dedicated to the end-to-end ODUk path and to 6 levels of tandem connection monitoring. The ODUk path OH is terminated where the ODUk is assembled and disassembled. The TC OH is added and terminated at the source and sink of the corresponding tandem connections, respectively.

Figure 12 ODUk overhead

 

5.2.1     Path Monitoring (PM)

Figure 13 ODUk path monitoring overhead

5.2.1.1     Trail Trace Identifier (TTI)

The TTI is a 64-Byte signal that occupies one byte of the frame and is aligned with the OTUk multi-frame. It is transmitted 4 times per multi-frame.

5.2.1.2     BIP-8

This byte provides a bit interleaved parity-8 (BIP-8) code.

The ODUk BIP-8 is computed over the bits in the OPUk (columns 15 to 3824) area of ODUk frame i, and inserted in the ODUk PM BIP-8 overhead location in the ODUk frame i+2.
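
Because BIP-8 is even parity computed independently over each of the eight bit positions, it reduces to a byte-wise XOR over the monitored area. A minimal illustrative sketch (not equipment code):

from functools import reduce

def bip8(monitored_bytes):
    # BIP-8 over the OPUk area (columns 15-3824) of frame i; the result is
    # carried in the PM BIP-8 position of frame i+2 (not modelled here).
    return reduce(lambda acc, b: acc ^ b, monitored_bytes, 0)

print(f"{bip8(b'example OPUk payload bytes'):02x}")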

5.2.1.3     Backward Defect Indication (BDI)

This is defined to convey the “Signal Fail” status detected at the path terminating sink function, to the upstream node.

5.2.1.4     Backward Error Indication and Backward Incoming Alignment Error (BEI/BIAE)

This signal is used to convey in the upstream direction the count of interleaved-bit blocks that have been detected in error by the corresponding ODUk path monitoring sink using the BIP-8 code. This count has nine legal values, namely 0-8 errors. The remaining seven possible values represented by these 4 bits can only result from some unrelated condition and are interpreted as 0 errors.

5.2.1.5     Path Monitoring Status (STAT)

They indicate the presence of a maintenance signal.

5.2.2     Tandem Connection Monitoring (TCM)

There are 6 TCMs. They can be nested or overlapping.

Figure 14 ODUk tandem connection monitoring #i overhead

5.2.2.1     Trail Trace Identifier (TTI)

The TTI is a 64-Byte signal that occupies one byte of the frame and is aligned with the OTUk multi-frame. It is transmitted four times per multi-frame.

5.2.2.2     BIP-8

This byte provides a bit interleaved parity-8 (BIP-8) code.

Each ODUk BIP-8 is computed over the bits in the OPUk (columns 15 to 3824) area of ODUk frame i, and inserted in the ODUk TCM BIP-8 overhead location (associated with the tandem connection monitoring level) in ODUk frame i+2.

The BIP-8 is only overwritten at the start of a Tandem Connection. Any existing TCM is not overwritten.

5.2.2.3     Backward Defect Indication (BDI)

This is defined to convey the “Signal Fail” status detected at the path terminating sink function, to the upstream node.

5.2.2.4     Backward Error Indication and Backward Incoming Alignment Error (BEI/BIAE)

This signal is used to convey in the upstream direction the count of interleaved-bit blocks that have been detected as being in error by the corresponding ODUk tandem connection monitoring sink using the BIP-8 code. It is also used to convey in the upstream direction an incoming alignment error (IAE) condition that is detected in the corresponding ODUk tandem connection monitoring sink in the IAE overhead.

During an IAE condition the code “1011” is inserted into the BEI/BIAE field and the error count is ignored. Otherwise the error count (0-8) is inserted into the BEI/BIAE field. The remaining 6 possible values represented by these 4 bits can only result from some unrelated condition and are interpreted as 0 errors and BIAE not active.
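
The receive-side interpretation of this 4-bit field can be summarized in a small illustrative helper (values and behaviour as described above; the function name is ours):

def decode_bei_biae(nibble):
    # Return (error_count, biae_active) for a received 4-bit BEI/BIAE value.
    if nibble == 0b1011:            # BIAE indication; the error count is ignored
        return 0, True
    if 0 <= nibble <= 8:            # legal BIP-8 error counts
        return nibble, False
    return 0, False                 # remaining values: 0 errors, BIAE not active

print(decode_bei_biae(0b0011))      # (3, False)
print(decode_bei_biae(0b1011))      # (0, True)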

5.2.2.5     TCM Monitoring Status (STAT)

For each tandem connection monitoring field three bits are defined as status bits (STAT). They indicate the presence of a maintenance signal (if there is an incoming alignment error at the source TCM, or if there is no source TCM active).

5.2.2.6     Tandem Connection Monitoring ACTivation/deactivation (TCM-ACT)

Its definition is for further study.

5.2.3     General Communication Channels (GCC1, GCC2)

Two fields of two bytes are allocated in the ODUk overhead to support two general communications channels between any two network elements with access to the ODUk frame structure (i.e., at 3R regeneration points). These are clear channels. The bytes for GCC1 are located in row 4, columns 1 and 2, and the bytes for GCC2 are located in row 4, columns 3 and 4 of the ODUk overhead.

5.2.4     Automatic Protection Switching and Protection Communication Channel (APS/PCC)

Up to 8 levels of nested APS/PCC signals may be present in this field. The APS/PCC bytes in a given frame are assigned to a dedicated level depending on the value of MFAS as follows:

MFAS bits 678   APS/PCC channel applies to connection level   Protection scheme using the APS/PCC channel
000             ODUk Path                                     ODUk SNC/N
001             ODUk TCM1                                     ODUk SNC/S, ODUk SNC/N
010             ODUk TCM2                                     ODUk SNC/S, ODUk SNC/N
011             ODUk TCM3                                     ODUk SNC/S, ODUk SNC/N
100             ODUk TCM4                                     ODUk SNC/S, ODUk SNC/N
101             ODUk TCM5                                     ODUk SNC/S, ODUk SNC/N
110             ODUk TCM6                                     ODUk SNC/S, ODUk SNC/N
111             OTUk Section                                  ODUk SNC/I

Table 4 Multi-frame to allow separate APS/PCC for each monitoring level

For linear protection schemes, the bit assignments for these bytes and the bit-oriented protocol are given in ITU-T Recommendation G.873.1. Bit assignment and byte oriented protocol for ring protection schemes are for further study.
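
A small illustrative helper for the Table 4 assignment, selecting the monitoring level from the three least significant MFAS bits (bits 6, 7 and 8):

# Illustrative mapping of MFAS bits 6, 7, 8 (the 3 LSBs) per Table 4.
APS_LEVEL = {
    0b000: "ODUk Path",
    0b001: "ODUk TCM1", 0b010: "ODUk TCM2", 0b011: "ODUk TCM3",
    0b100: "ODUk TCM4", 0b101: "ODUk TCM5", 0b110: "ODUk TCM6",
    0b111: "OTUk Section",
}

def aps_level(mfas_byte):
    return APS_LEVEL[mfas_byte & 0b111]

print(aps_level(0x05))   # 'ODUk TCM5'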

5.2.5     Fault Type and Fault Location reporting communication channel (FTFL)

One byte is allocated in the ODUk overhead to transport a 256-byte fault type and fault location (FTFL) message. The byte is located in row 2, column 14 of the ODUk overhead.

 

5.3      OTUk Overhead and Processing

The OTUk (k = 1,2,3) frame structure is based on the ODUk frame structure and extends it with a forward error correction (FEC). 256 columns are added to the ODUk frame for the FEC and the overhead bytes in row 1, columns 8 to 14 of the ODUk overhead are used for OTUk specific overhead, resulting in an octet-based block frame structure with 4 rows and 4080 columns.

The OTUk forward error correction (FEC) contains the Reed-Solomon RS(255,239) FEC codes. If no FEC is used, fixed stuff bytes (all-0s pattern) are inserted.

OTUk OH information is part of the OTUk signal structure. It includes information for operational functions to support the transport via one or more optical channel connections. The OTUk OH is terminated where the OTUk signal is assembled and disassembled.

Figure 15 OTUk Overhead

Figure 16 OTUk overhead

5.3.1     Scrambling

The OTUk signal needs sufficient bit timing content to allow a clock to be recovered. A suitable bit pattern, which prevents a long sequence of “1”s or “0”s, is provided by using a scrambler.

The operation of the scrambler is functionally identical to that of a frame synchronous scrambler of sequence length 65535 operating at the OTUk rate.

The generating polynomial is 1 + x + x^3 + x^12 + x^16.

The scrambler is reset to “FFFF” (HEX) on the most significant bit of the byte following the last framing byte in the OTUk frame, i.e. the MSB of the MFAS byte. This bit and all subsequent bits to be scrambled are added modulo 2 to the output from the x16 position of the scrambler. The scrambler runs continuously throughout the complete OTUk frame. The framing bytes (FAS) of the OTUk overhead are not scrambled.

Scrambling is performed after FEC check bytes computation and insertion into the OTUk signal.
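
One common way to realize such a frame-synchronous scrambler is a 16-bit LFSR with taps at the x^16, x^12, x^3 and x stages, preset to all ones; the sketch below is illustrative only, and the exact bit-ordering conventions of a compliant implementation should be taken from G.709:

def scramble_bits(bits):
    # Illustrative frame-synchronous scrambler for 1 + x + x^3 + x^12 + x^16.
    # state[0] is the x^1 stage, state[15] the x^16 stage; preset to all ones
    # (0xFFFF) at the MSB of the MFAS byte.  The FAS bytes themselves are not
    # scrambled in a real OTUk frame (not modelled here).
    state = [1] * 16
    out = []
    for b in bits:
        s16 = state[15]                                    # output of the x^16 stage
        out.append(b ^ s16)                                # data bit + x^16 output, modulo 2
        feedback = s16 ^ state[11] ^ state[2] ^ state[0]   # taps x^16, x^12, x^3, x^1
        state = [feedback] + state[:-1]                    # shift towards the x^16 stage
    return out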

5.3.2     Frame Alignment Overhead

5.3.2.1     Frame Alignment Signal (FAS)

A 6 byte OTUk-FAS signal is defined in row 1, columns 1 to 6 of the OTUk overhead.

OA1 is “1111 0110”. OA2 is “0010 1000”.

Figure 17 Frame alignment signal overhead structure

5.3.2.2     Multi-Frame Alignment Signal (MFAS)

Some of the OTUk and ODUk overhead signals span multiple OTUk/ODUk frames. A single multi-frame alignment signal (MFAS) byte is defined in row 1, column 7 of the OTUk/ODUk overhead. The value of the MFAS byte is incremented in each OTUk/ODUk frame, providing a 256-frame multi-frame.

Individual OTUk/ODUk overhead signals use this central multi-frame to lock their 2-frame, 4-frame, 8-frame, 16-frame, 32-frame, etc. multi-frames to the principal frame.
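
A small illustrative sketch of this locking: because MFAS wraps every 256 frames, the phase of any n-frame sub-multi-frame (n a power of two up to 256) is simply MFAS modulo n:

def subframe_phase(mfas, length):
    # Phase of an n-frame multi-frame (n a power of two up to 256) locked to MFAS.
    return mfas % length

mfas = 0x47                        # MFAS increments by 1 per frame, modulo 256
print(subframe_phase(mfas, 4))     # position within a 4-frame multi-frame
print(subframe_phase(mfas, 64))    # e.g. which of the 64 TTI bytes this frame carries
print(subframe_phase(mfas, 256))   # e.g. which PSI byte this frame carries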

5.3.3     Section Monitoring (SM)

Figure 18 OTUk section monitoring overhead

5.3.3.1     Trail Trace Identifier (TTI)

The TTI is a 64-Byte signal that occupies one byte of the frame and is aligned with the OTUk multi-frame. It is transmitted four times per multi-frame.

5.3.3.2     BIP-8

This byte provides a bit interleaved parity-8 (BIP-8) code.

The OTUk BIP-8 is computed over the bits in the OPUk (columns 15 to 3824) area of OTUk frame i, and inserted in the OTUk BIP-8 overhead location in OTUk frame i+2.

Note: The OPUk includes the Justification Bytes; thus an OTN signal cannot be retimed without de-mapping back to the client signal.

5.3.3.3     Backward Defect Indication (BDI)

This is defined to convey the “Signal Fail” Status detected at the Section Terminating Sink Function, to the upstream node.

5.3.3.4     Backward Error Indication and Backward Incoming Alignment Error (BEI/BIAE)

This signal is used to convey in the upstream direction the count of interleaved-bit blocks that have been detected in error by the corresponding OTUk section monitoring sink using the BIP-8 code. It is also used to convey in the upstream direction an incoming alignment error (IAE) condition that is detected in the corresponding OTUk section monitoring sink in the IAE overhead.

During an IAE condition the code “1011” is inserted into the BEI/BIAE field and the error count is ignored. Otherwise the error count (0-8) is inserted into the BEI/BIAE field. The remaining six possible values represented by these four bits can only result from some unrelated condition and are interpreted as 0 errors and BIAE not active.

5.3.3.5     Incoming Alignment Error (IAE)

A single-bit incoming alignment error (IAE) signal is defined to allow the ingress point to inform its peer egress point that an alignment error in the incoming signal has been detected.

IAE is set to “1” to indicate a frame alignment error; otherwise it is set to “0”.

The egress point may use this information to suppress the counting of bit errors, which may occur as a result of a frame phase change of the OTUk at the ingress of the section.

5.3.4     General Communication Channel 0 (GCC0)

Two bytes are allocated in the OTUk overhead to support a general communications channel between OTUk termination points. This is a clear channel. These bytes are located in row 1, columns 11 and 12 of the OTUk overhead.

 

6       OTN Maintenance Signals

6.1      OTUk maintenance signals

6.1.1     OTUk alarm indication signal (OTUk-AIS)

The OTUk-AIS is a generic-AIS signal. Since the OTUk capacity (130 560 bits) is not an integer multiple of the PN-11 sequence length (2047 bits), the PN-11 sequence may cross an OTUk frame boundary.

The PN-11 sequence is defined by the generating polynomial 1 + x^9 + x^11.

6.2      ODUk maintenance signals

Three ODUk maintenance signals are defined: ODUk-OCI, ODUk-LCK and ODUk-AIS.

6.2.1     ODUk Open Connection Indication (ODUk-OCI)

ODUk-OCI is specified as a repeating “0110 0110” pattern in the entire ODUk signal, excluding the frame alignment overhead (FA OH) and OTUk overhead (OTUk OH).

The presence of ODUk-OCI is detected by monitoring the ODUk STAT bits in the PM and TCMi overhead fields.

The insertion of this is under management control. There is no defect that inserts ODUk-OCI.

6.2.2     ODUk Locked (ODUk-LCK)

ODUk-LCK is specified as a repeating “0101 0101” pattern in the entire ODUk signal, excluding the Frame Alignment overhead (FA OH) and OTUk overhead (OTUk OH).

The presence of ODUk-LCK is detected by monitoring the ODUk STAT bits in the PM and TCMi overhead fields.

The insertion of this is under management control. There is no defect that inserts ODUk-LCK.

6.2.3     ODUk Alarm Indication Signal (ODUk-AIS)

ODUk-AIS is specified as all “1”s in the entire ODUk signal, excluding the frame alignment overhead (FA OH), OTUk overhead (OTUk OH) and ODUk FTFL.

The presence of ODUk-AIS is detected by monitoring the ODUk STAT bits in the PM and TCMi overhead fields.

ODUk-AIS is generated if the OTUk input signal fails or it detects ODUk-OCI or ODUk-LCK on the input signal.

6.3      Client maintenance signal

6.3.1     Generic AIS for constant bit rate signals

The generic-AIS signal is a signal with a 2047-bit polynomial number 11 (PN-11) repeating sequence.

The PN-11 sequence is defined by the generating polynomial 1 + x^9 + x^11.
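
An illustrative PN-11 generator (an 11-bit LFSR for 1 + x^9 + x^11 with a 2047-bit period); the function name and seed are ours, and a compliant implementation would follow the G.709 figure exactly:

def pn11_bits(n, seed=0b11111111111):
    # 11-bit LFSR for 1 + x^9 + x^11; any non-zero seed gives the 2047-bit sequence.
    state = [(seed >> i) & 1 for i in range(11)]   # state[10] is the x^11 stage
    out = []
    for _ in range(n):
        out.append(state[10])
        feedback = state[10] ^ state[8]            # taps x^11 and x^9
        state = [feedback] + state[:-1]
    return out

seq = pn11_bits(2 * 2047)
print(seq[:2047] == seq[2047:])                    # True: the period is 2^11 - 1 = 2047 bits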

During a signal fail condition of the incoming CBR2G5, CBR10G or CBR40G client signal (e.g. in the case of a loss of input signal), this failed incoming signal is replaced by the generic-AIS signal, and is then mapped into the OPUk.

During a signal fail condition of the incoming ODUk/OPUk signal (e.g. in the case of an ODUk-AIS, ODUk-LCK or ODUk-OCI condition), the generic-AIS pattern is generated as a replacement signal for the lost CBR2G5, CBR10G or CBR40G signal.

Maintenance Signal Insertion

6.3.2     Client source ODUk-AIS

During a signal fail condition of the incoming ODUj client signal (e.g. OTUj-LOF), this failed incoming signal will be replaced by the ODUj-AIS signal. This ODUj-AIS is then mapped into the respective timeslot in the ODUk.

6.3.3     Client source ODUk-OCI

For the case where the ODUj is received from the output of a fabric (ODUj connection function), the incoming signal may contain (in the case of an open matrix connection) the ODUj-OCI signal. This ODUj-OCI signal is then mapped into the respective timeslot in the ODUk.

Not all equipment will have a real connection function (switch fabric) implemented; instead the presence/absence of tributary interface port units represents the presence/absence of a matrix connection.

If such a unit is intentionally absent or not installed, the associated timeslot in the ODUk shall carry an ODUj-OCI signal.

If such a unit is installed but temporarily removed as part of a repair action, the associated timeslot in the ODUk shall carry an ODUj-AIS signal.

 

7       Defect detection

There are no defects detected in the multiplexer. There are defects detected in the de-multiplexer.

7.1      dPLM (Payload Mismatch)

dPLM is declared if the accepted payload type (AcPT) is not equal to the expected payload type(s) as defined by the specific adaptation function. dPLM is cleared if the accepted payload type is equal to the expected payload type(s).

A new payload type (AcPT) is accepted if a new consistent value is received in the PSI[0] byte in 3 consecutive multi-frames.

7.2      dMSIM (Multiplex Structure Identifier Mismatch supervision)

dMSIM is declared if the accepted MSI (AcMSI) is not equal to the expected multiplex structure identifier (ExMSI). dMSIM is cleared if the AcMSI is equal to the ExMSI. ExMSI is configured via the management interface. A new multiplex structure identifier (AcMSI) is accepted if a new consistent value is received in the MSI bytes of the PSI overhead (PSI[2…5] for ODU2, PSI[2…17] for ODU3) in 3 consecutive multi-frames.
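
Both defects rely on the same acceptance rule: a received value is accepted only after it has appeared unchanged in 3 consecutive multi-frames, and the defect is declared while the accepted value differs from the expected one. A hedged sketch (class and field names are descriptive, not from G.798):

class AcceptedValue:
    # Descriptive sketch of the 3-multi-frame acceptance used for AcPT / AcMSI.
    def __init__(self, initial):
        self.accepted = initial
        self._candidate, self._count = None, 0

    def update(self, received):
        if received == self.accepted:
            self._candidate, self._count = None, 0
        elif received == self._candidate:
            self._count += 1
            if self._count >= 3:                   # same new value in 3 consecutive multi-frames
                self.accepted = received
                self._candidate, self._count = None, 0
        else:
            self._candidate, self._count = received, 1
        return self.accepted

def mismatch_defect(accepted, expected):
    return accepted != expected                    # dPLM / dMSIM declared while this holds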

7.3      dLOFLOM (Loss of Frame and Multi-frame)

If the frame alignment process is in the out-of-frame (OOF) state for 3 ms, dLOFLOM is declared. To guard against intermittent OOF events, the integrating timer is not reset to zero until an in-frame (IF) condition persists continuously for 3 ms. dLOFLOM is cleared when the IF state persists continuously for 3 ms.

The ODUj frame and multi-frame alignment is found by searching for the framing pattern (OA1, OA2 FAS bytes) and checking the multi-frame sequence (MFAS byte) contained in the ODUj frame.

In the out-of-frame state the framing pattern searched for is the full set of the OA1 and OA2 bytes. The in-frame (IF) state is entered if this set is found and confirmed one frame period later and an error-free multi-frame sequence is found in the MFAS bytes of the two frames.

In the in-frame state (IF) the frame alignment signal is continuously checked against the presumed frame start position and the expected multi-frame sequence. The framing pattern checked for is the OA1OA2 pattern (bytes 3 and 4 of the first row of the ODUj frame). The out-of-frame state (OOF) is entered if this subset is not found at the correct position in 5 consecutive frames or the received MFAS does not match the expected multi-frame number in 5 consecutive frames.

The frame and multi-frame start are maintained during the OOF state.

There is one of these defects for each tributary.
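
The supervision described above can be condensed into a simplified control-flow sketch (per tributary); real equipment behaviour, in particular timer handling, should be taken from G.798, so treat this only as an illustration of the OOF/IF transitions and the 3 ms persistency:

class FrameAlignment:
    # Simplified per-tributary OOF/IF supervision; "good" means the OA1OA2 bytes
    # are found at the expected position and MFAS matches the expected count.
    def __init__(self):
        self.in_frame, self.dLOFLOM = True, False
        self.bad, self.good = 0, 0          # consecutive bad (IF) / good (OOF) frames
        self.oof_ms = self.if_ms = 0.0

    def on_frame(self, good, frame_period_ms):
        if self.in_frame:
            self.if_ms += frame_period_ms
            self.bad = 0 if good else self.bad + 1
            if self.bad >= 5:               # pattern missing in 5 consecutive frames -> OOF
                self.in_frame, self.good = False, 0
                self.oof_ms = self.if_ms = 0.0
            elif self.dLOFLOM and self.if_ms >= 3.0:
                self.dLOFLOM = False        # cleared after 3 ms of continuous IF
        else:
            self.oof_ms += frame_period_ms
            self.good = self.good + 1 if good else 0
            if self.oof_ms >= 3.0:
                self.dLOFLOM = True         # OOF persisted for 3 ms
            if self.good >= 2:              # FAS found and confirmed one frame later
                self.in_frame, self.bad, self.if_ms = True, 0, 0.0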

 

8       Synchronization

8.1      Introduction

OTN is transparent to the payload it transports within the ODUk. The OTN layer does not need to transport network synchronization since network synchronization can be transported within the payload, mainly by SDH client tributaries.

Two types of mapping have been specified for the transport of CBR payload, e.g. SDH.

The first one is the asynchronous mapping, which is the most widely used, where the payload floats within the OTN frame. In this case, there is no frequency relationship between the payload and the OTN frame frequencies, thus simple free running oscillators can be used to generate the OTN frame.

The second is the synchronous mapping, where the timing used to generate the OTN frame is extracted from a CBR client tributary, e.g. SDH. In case of LOS of the client input, the OTN frame (which then carries no payload) is generated by a free-running oscillator, without the need for a holdover mode.

This specification allows for a very simple implementation of timing in OTN equipment compared to SDH.

An OTN NE does not require synchronization interfaces, complex clocks with holdover mode or SSM processing. Another difference with SDH is that there is no geographical option for the timing aspects of OTN.

OTN transports client signals in a G.709 frame (OTUk) that is carried by an OCh on one lambda of the Optical Transport Module (OTM). Each lambda carries its G.709 frame with its own frequency; there is no common clock for the different OTUk of the OTM.

A trail through the OTN is generated in an OTN NE that maps the client into an ODUk and is terminated in another OTN NE that de-maps the client signal from the ODUk. Between the two OTN trail terminations there might be 3R regenerators, which perform complete regeneration of the pulse shape, clock recovery and retiming within required jitter limits.

The number of 3R regenerators that can be cascaded in tandem depends on the specification of this regenerator and on the jitter and wander generation and tolerance applicable to the OTUk interfaces; it is stated to be at least 50.

ODUk multiplexing has been standardized; its implication on timing has been taken into account in the relevant recommendations.

8.2      Network requirements

In an OTN, jitter and wander accumulate along a transmission path according to the generation and transfer characteristics of the interconnected equipment: 3R regenerators, client mappers and de-mappers, and multiplexers and de-multiplexers. In order to avoid the effects of excessive jitter and wander, ITU-T Recommendation G.8251 specifies the maximum magnitude of jitter and wander, and the minimum jitter and wander tolerance, at OTN network interfaces.

The OTN generates and accumulates jitter and wander on its client signals due to the buffers of the mapping into ODUk and due to the ODUk multiplexing. The limits for such accumulation are given in the ITU-T Recommendation G.825 for SDH signal clients.

Jitter and wander is also accumulated on the OTN signals itself due to the ODUk multiplexing and 3R jitter generation. The network limits for this are given in the ITU-T Recommendation G.8251.

 

The ITU-T Recommendation G.8251 specifies the jitter and wander tolerance. As OTN clocks do not generate wander, no wander limit has been defined for OTN.

ITU-T Recommendation G.8251 specifies the different types of clocks that are required to perform the following functions; the accuracy of these clocks depends on the definition of the G.709 frame and on the accuracy specified for the clients.

  • Asynchronous mapping of a client into an ODUk and ODUk multiplexing: this ODCa clock is a free-running clock with a frequency accuracy of ± 20 ppm.
  • Synchronous mapping of a client into an ODUk: this ODCb clock is locked on the client frequency.
  • 3R regeneration: this ODCr clock is locked on an OCh input frequency which must be within ± 20 ppm.
  • De-mapping a client signal from an ODUk and ODUk de-multiplexing: this ODCp clock is locked on an OCh input frequency which must be within ± 20 ppm.

The ITU-T Recommendation G.8251 specifies the jitter generation of these clocks and, when applicable, noise tolerance, jitter transfer and transient response.

All these clock functions are used for clock recovery and clock filtering of a particular signal. They never serve as an equipment synchronization source. Therefore there is no holdover mode specified for these clocks since there is no need for an accurate clock when the input signal disappears.

The ITU-T Recommendation G.8251 provides a provisional adaptation of the SDH synchronization reference chain to include OTN islands. This is an amendment of the reference chain being defined in the ITU-T Recommendation G.803. Considering that SDH may be transported by OTN islands, the SEC will no longer be present but replaced by OTN NEs. This leads to the definition of a reference chain where all SECs located between 2 SSUs are replaced by an OTN island. The local part of the reference chain, after the last SSU can still support 20 SECs in tandem. Each of these islands may be composed of OTN NEs performing mapping/de-mapping or multiplexing/de-multiplexing operations. This adaptation of the reference chain raises a buffer size constraint for the OTN NEs in order to keep the overall network wander performance within specified limits. Predominantly the mapping and the de-mapping functions of the OTN contribute to wander accumulation due to the buffers being involved in these functions. The size limit of these buffers is specified in the ITU-T Recommendation G.798. This allows inserting up to 10 mapping/ multiplexing nodes per OTN island. A total of 100 mapping/de-mapping functions can be performed on this synchronization reference chain.

ITU-T Recommendation G.8251 presents a Hypothetical Reference Model for 3R regenerator jitter accumulation: according to this model, at any OTUk interface the jitter will remain within network limits in a chain of one mapping clock and up to 50 cascaded 3R regenerators plus a de-mapping clock. It reports the results of extensive simulations showing that it is possible to have 50 OTN regenerators without exceeding the network limits of OTUk interfaces, assuming the regenerators comply with the model defined in this Recommendation.

 

9       Why use OTN

OTN offers the following advantages relative to SDH:

  • Stronger Forward Error Correction (FEC)
  • More levels of Tandem Connection Monitoring (TCM)
  • Transparent transport of client signals
  • Switching scalability

OTN has the following disadvantages:

  • Requires new hardware and management system

9.1      Forward Error Correction (FEC)

Forward error correction is a major feature of the OTN.

SDH already has a FEC defined. It uses undefined SOH bytes to transport the FEC check information and is therefore called an in-band FEC. It allows only a limited amount of FEC check information, which limits the performance of the FEC.

For the OTN a Reed-Solomon 16 byte-interleaved FEC scheme is defined, which uses 4 x 256 bytes of check information per ODU frame.

Figure 19 Error correction in OTN

According to ITU-T Recommendation G.709, a Reed-Solomon (255,239) code with a symbol size of 8 bits is used for the FEC. 239 input bytes are encoded into 255 output bytes. This code enables the detection of 2t = (n – k) = 16 errors in a codeword and the correction of t = (n – k)/2 = 8 of them.
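
A quick arithmetic check of these RS(255,239) figures (illustration only, not an FEC implementation):

n, k = 255, 239                     # codeword and information symbols (bytes)
parity = n - k                      # 16 check bytes per codeword
t = parity // 2                     # up to 8 byte errors correctable per codeword
overhead = n / k - 1                # ~6.7 % rate expansion
print(parity, t, f"{overhead:.1%}") # 16 8 6.7%
# One OTUk row is 4080 bytes = 16 interleaved codewords, i.e. 16 x 16 = 256 FEC
# bytes per row, matching the 256 FEC columns appended to each of the 4 rows.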

FEC has been proven to be effective in OSNR limited systems as well as in dispersion limited systems. As for non-linear effects, reducing the output power leads to OSNR limitations, against which FEC is useful. FEC is less effective against PMD, however.

 

G.709 defines a stronger Forward Error Correction for OTN that can result in up to 6.2 dB improvement in Signal-to-Noise Ratio (SNR). Another way of looking at this is that a signal can be transmitted at a given Bit Error Rate (BER) with 6.2 dB less power than without such a FEC.

The coding gain provided by the FEC can be used to:

  • Increase the maximum span length and/or the number of spans, resulting in an extended reach. (Note that this assumes that other impairments like chromatic and polarization mode dispersion are not becoming limiting factors.)
  • Increase the number of DWDM channels in a DWDM system which is limited by the output power of the amplifiers by decreasing the power per channel and increasing the number of channels. (Note that changes in non-linear effects due to the reduced per channel power have to be taken into account.)
  • Relax the component parameters (e.g. launched power, eye mask, extinction ratio, noise figures, and filter isolation) for a given link and lower the component costs.
  • Most importantly, the FEC is an enabler for transparent optical networks:

Transparent optical network elements like OADMs introduce significant optical impairments (e.g. attenuation). The number of transparent optical network elements that can be crossed by an optical path before 3R regeneration is needed is therefore strongly limited. With FEC an optical path can cross more transparent optical network elements.

This allows evolving from today’s point-to-point links to transparent, meshed optical networks with sufficient functionality.

 

9.2      Tandem Connection Monitoring

SDH monitoring is divided into section and path monitoring. A problem arises in a “Carrier’s Carrier” situation, where it is required to monitor a segment of the path that passes through another carrier’s network.

Figure 20 Tandem Connection Monitoring

Here Operator A needs Operator B to carry its signal. However, it also needs a way of monitoring the signal as it passes through Operator B’s network. This is what a “Tandem Connection” is. It is a layer between Line Monitoring and Path Monitoring. SDH was modified to allow a single Tandem Connection; G.709 allows 6.

TCM1 is used by the User to monitor the Quality of Service (QoS) that they see. TCM2 is used by the first operator to monitor their end-to-end QoS. TCM3 is used by the various domains for Intra domain monitoring. Then TCM4 is used for protection monitoring by Operator B.

There is no standard on which TCM is used by whom. The operators have to have an agreement, so that they don’t conflict.

TCMs also support monitoring of ODUk (G.709 w/o FEC) connections for one or more of the following network applications (refer to ITU-T G.805 and ITU-T G.872):

  • optical UNI to UNI tandem connection monitoring; monitoring the ODUk connection through the public transport network (from public network ingress network termination to egress network termination);
  • optical NNI to NNI tandem connection monitoring; monitoring the ODUk connection through the network of a network operator (from operator network ingress network termination to egress network termination);
  • sub-layer monitoring for linear 1+1, 1:1 and 1:n optical channel sub-network connection protection switching, to determine the signal fail and signal degrade conditions;
  • sub-layer monitoring for optical channel shared protection ring (SPRING) protection switching, to determine the signal fail and signal degrade conditions;
  • Monitoring an optical channel tandem connection for the purpose of detecting a signal fail or signal degrade condition in a switched optical channel connection, to initiate automatic restoration of the connection during fault and error conditions in the network;
  • Monitoring an optical channel tandem connection for, e.g., fault localization or verification of delivered quality of service.

A TCM field is assigned to a monitored connection. The number of monitored connections along an ODUk trail may vary between 0 and 6. Monitored connections can be nested, overlapping and/or cascaded.

Figure 21 ODUk monitored connections

Monitored connections A1-A2/B1-B2/C1-C2 and A1-A2/B3-B4 are nested, while B1-B2/B3-B4 are cascaded.

 

Overlapping monitored connections are also supported.

Figure 22 Overlapping ODUk monitored connections
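
The relationships between monitored connections can be pictured as intervals along the trail. The following sketch (hypothetical helper function, with positions loosely based on the segment letters in the figures above) classifies a pair of segments as nested, cascaded or overlapping:

  # Sketch only: classify two monitored connections by their positions along an ODUk trail.
  def classify(seg1, seg2):
      (a1, a2), (b1, b2) = sorted([seg1, seg2])
      if a1 <= b1 and b2 <= a2:
          return "nested"          # one segment lies completely inside the other
      if b1 >= a2:
          return "cascaded"        # the second segment starts where (or after) the first ends
      return "overlapping"         # the segments partially overlap

  A = (0, 10)   # A1-A2
  B = (2, 8)    # B1-B2
  C = (8, 10)   # B3-B4, following B1-B2 on the same trail
  print(classify(A, B))   # -> nested
  print(classify(B, C))   # -> cascaded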

9.3      Transparent Transport of Client Signals

G.709 defines the OPUk, which can contain an entire SDH signal. This means that one can transport 4 STM-16 signals in one OTU2 without modifying any of the SDH overhead.

Thus the transport of such client signals in the OTN is bit-transparent (i.e. the integrity of the whole client signal is maintained).

It is also timing transparent. The asynchronous mapping mode transfers the input timing (asynchronous mapping client) to the far end (asynchronous de-mapping client).

It is also delay transparent. For example, if 4 STM-16 signals are mapped into ODU1s and then multiplexed into an ODU2, their timing relationship is preserved until they are de-mapped back to ODU1s.
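
The nominal bit rates follow simple ratios defined in G.709 (the values below are the commonly quoted nominal rates; exact figures and tolerances are in the Recommendation):

  # Sketch only: nominal G.709 rates derived from the SDH client rates (kbit/s).
  STM16 = 2_488_320            # STM-16 / OC-48 client
  STM64 = 9_953_280            # STM-64 / OC-192 client
  ODU1 = STM16 * 239 / 238     # ~2 498 775 kbit/s
  ODU2 = STM64 * 239 / 237     # ~10 037 274 kbit/s
  OTU2 = STM64 * 255 / 237     # ~10 709 225 kbit/s line rate, including FEC
  print(round(ODU1), round(ODU2), round(OTU2))
  # Four ODU1s total ~9 995 100 kbit/s, which fits within the OPU2 payload capacity (~9 995 277 kbit/s).
  print(4 * round(ODU1))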

9.4      Switching Scalability

When SDH was developed, its main purpose was to provide the transport technology for voice services. Two switching levels were therefore defined: low-order switching at the VC-12 level directly supports E1 voice signals, and a high-order switching level at the VC-4 level is used for traffic engineering. Switching levels at higher bit rates were not foreseen.

 

Over time the line rate increased while the switching rate was fixed. The gap between line rate and switching bit rate widened. Furthermore new services at higher bit rates (IP, Ethernet services) had to be supported.

Contiguous and virtual concatenation were introduced in order to solve part of the services problem, as they allow services above the standard SDH switching bit rates to be supported.

The gap between line or service bit rate and switching bit rate however still exists as even with concatenation switching is performed at the VC-4 level.

For a 4 x 10G to 40G SDH multiplexer this means processing 256 VC-4s. This results in significant effort not only in the equipment hardware, but also in management and operations.
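
The VC-4 count follows directly from the STM-N hierarchy (each STM-N carries N VC-4s, a VC-4 corresponding to roughly 150 Mbit/s):

  # Sketch only: number of VC-4s that a 4 x 10G to 40G SDH multiplexer has to process.
  vc4_per_stm64 = 64                  # one STM-64 (10G) carries 64 VC-4s
  tributaries = 4                     # four 10G tributaries
  print(tributaries * vc4_per_stm64)  # -> 256 VC-4s to cross-connect and manage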

For efficient equipment and network design and operations, switching at higher bit rates has to be introduced.

One could now argue that photonic switching of wavelengths is the solution. But with photonic switching the switching bit rate is bound to the bit rate of the wavelength, and so is the service bit rate. An independent selection of service bit rates and DWDM technology is not possible.

An operator offering 2.5 Gbit/s IP interconnection would need an N x 2.5G DWDM system. When adding 10G services, it has to upgrade some of its wavelengths to 10G. This would lead to inefficient network designs.

OTN provides the solution to the problem by placing no restrictions on switching bit rates. As the line rate grows new switching bit rates are added.

An operator can offer services at various bit rates (2G5, 10G …) independent of the bit rate per wavelength using the multiplexing and inverse multiplexing features of the OTN.

 

Figure: OTN alarm flow

>> What is PMD versus Differential Group Delay (DGD)?

Polarization Mode Dispersion (PMD) is the average Differential Group Delay (DGD) one expects to see when measuring an optical fiber.
DGD is the time separation or delay between the two principal polarization modes of the transmission link at the receiver.
DGD is an instantaneous event and varies randomly with wavelength and time. This means that DGD is a statistical parameter, obeys the laws of probability theory and thus has uncertainty associated with it.
PMD is the average value of a distribution of a large number of independent DGD measurements.

>> What is fiber PMD versus cable PMD?

Fiber PMD is PMD that is generally measured on spool, while cable PMD would be the PMD of the fibers in a cabled/installed mode.
The spool measurement is not an accurate indicator of cabled and installed PMD. Many studies have found that there is no correlation between the two.
Other measurement techniques such as a Low Mode Coupling (LMC) measurement have been developed to make this correlation. However, LMC is destructive and time consuming and thus, predominately performed on a sampling basis.
Fiber customers should insist on data from the cable manufacturer that establishes this correlation and provides a Link Design Value (LDV) needed for their application.

>> What is a Link Design Value?

Link Design Value (LDV) is a useful design parameter for calculating the worst-case contribution of the fiber toward the overall system PMD of a link.
LDV, also referred to as PMDQ, is a term developed in standards bodies to evaluate the impact of fiber-related PMD where cabled fibers are deployed in concatenated sections. That is, in building a network, typically 2-10 km sections of cable are spliced together.
The LDV is the worst case PMD of the end-to-end link made up of randomly chosen cable sections spliced together. Thus the LDV represents the worst case PMD of a fiber path in a cabled and deployed span.
PMD standards indicate that LDV should be calculated from nominally 20 to 24 sections and have a maximum cumulative distribution Q of nominally 0.001 to 0.0001. This implies that 0.1% or 0.01% of all spans (made up of concatenated sections) would be above this level of PMD.

>> What is system PMD?

System PMD is the total PMD attributed to the collection of optical components that makes up an optical link between a transmitter and a receiver. Many of these components are illustrated in the following figure.
Figure: Optical components contributing to system PMD
Indeed, optical fiber may only be one of the components if amplification, power splitting, dispersion compensation or optical multiplexing is used in the link.
With PMD performance of cabled optical fibers continuing to decrease, more emphasis will be placed on the PMD performance of the other components in the link.
Typically, in a long haul system, the PMD attributed to the optical fiber itself is given half of the system PMD budget. The system PMD is calculated from the square root of the sum of the squares of each individual component PMD.
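
A minimal sketch of that root-sum-square calculation, using illustrative component PMD values (not values from any particular standard or datasheet):

  # Sketch only: system PMD as the root-sum-square of individual component PMDs (values in ps).
  import math

  component_pmd_ps = {
      "fiber": 2.0,            # cabled fiber contribution over the link
      "dispersion_comp": 0.7,  # dispersion compensation module
      "amplifiers": 0.5,       # EDFAs along the route
      "mux_demux": 0.3,        # optical multiplexing/filtering
  }
  system_pmd = math.sqrt(sum(v ** 2 for v in component_pmd_ps.values()))
  print(round(system_pmd, 2), "ps")   # -> about 2.2 ps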

>> How much PMD can be tolerated in my system?

The level of system PMD that can be tolerated depends on data rate, distance and how much system outage one is willing to tolerate.
The following figure indicates the average system outage as a function of system PMD for 10, 40 and 80 Gbps systems.
Figure: Average system outage as a function of system PMD for 10, 40 and 80 Gbps systems
As can be seen in this figure, as the system PMD limit for a given bit rate is approached, slight increases in PMD will cause significant increases in system outage.
International standard guidelines for PMD recommend an outage probability of 6.5 x 10^-8, corresponding to about 2 seconds of outage per year attributable to PMD.
Coupled with the 50% recommendation of system PMD that should be allocated for optical fiber, a goal for a 10 Gbps system would be a cabled optical fiber with ~ 4 ps of total PMD. This drops to below 1 ps if a 40 or 80 Gbps system is envisioned operating on this fiber in the future.
The PMD requirement on your cabled fiber is now only dependent on knowing the distance of the route. For example if you have a 100 km link at 10 Gbps, then your cabled fiber should have a LDV below 0.4 ps/sqrt(km).
For longer distances, the LDV requirements become even more stringent such that at 1000 km, a 10 Gbps system would require a cabled fiber LDV of < 0.13 ps/sqrt(km) while a 40 Gbps system would need a value < 0.03 ps/sqrt(km).
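
The numbers above follow from dividing the allowed fiber PMD by the square root of the route length; a quick sketch:

  # Sketch only: required cabled-fiber LDV (ps/sqrt(km)) from a fiber PMD budget and route length.
  import math

  def required_ldv(total_fiber_pmd_ps, length_km):
      return total_fiber_pmd_ps / math.sqrt(length_km)

  print(round(required_ldv(4.0, 100), 2))    # 10 Gbps, 100 km  -> 0.4 ps/sqrt(km)
  print(round(required_ldv(4.0, 1000), 2))   # 10 Gbps, 1000 km -> ~0.13 ps/sqrt(km)
  print(round(required_ldv(1.0, 1000), 3))   # 40 Gbps, 1000 km -> ~0.03 ps/sqrt(km)
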
Errors induced by PMD may be difficult to isolate. Since PMD is statistical in nature and varies with wavelength and time, errors can appear in random, frequent burst or long durations.
Errors may appear later in the life of a link as the cable plant ages and environmental factors change its performance. There are a host of external factors that can influence the PMD behavior of a link, from seasonal changes in temperature, to the effects of wind, to vibrations from proximity to railroad tracks or large building equipment.

>> How do I guard against the transient nature of PMD?

The best way to guard against the transient nature of PMD is to use a fiber that has been manufactured with a process that both lowers PMD and provides PMD stability with environmental changes.

>> Can I fix or compensate for PMD?

Yes, but it is generally an added cost since it must be done on a per channel basis.
There are two ways to compensate for PMD, optically or electronically.
Optical PMD compensation is done by splitting the signal into two polarization states, actively measuring the PMD of a signal and adjusting an optical delay line to retard or advance one of the polarizations. These techniques were generally very expensive and never were widely deployed.
Electronic PMD compensation can be done using adaptive processing techniques, but it has a limited range of compensation and is difficult to scale to higher data rates, such as 40 Gbps. It is best to minimize your need for compensation by installing low-PMD optical fiber and components.

>> Are there standards for PMD?

Yes, there are several documents that establish guidelines for everything from measurement of PMD to methods to determine LDV to how to concatenate components for a complete system view of PMD.
IEC 60793-3 describes the statistical specification of PMD for optical fiber cables, and IEC 61282-3 provides guidelines for calculating PMD in fiber optic systems.
Recommended levels of PMD in cabled fiber can be found in ITU-T G.652 and ITU-T G.655 for non-dispersion shifted fiber and non-zero dispersion fiber, respectively.
Typically, these are minimal requirements for PMD performance.

>> There is so much ambiguity in PMD specifications. What should I be looking for?

The most important specification one can look for is cabled fiber link design value (LDV), also referred to as quadrature average PMD (PMDQ) in standards bodies, such as ITU.
Purchased optical fiber is cabled, installed and concatenated with other cables along a route, so it makes sense to review a specification that best represents the PMD performance of deployed fiber.
Cabled fiber LDV does just that and is straightforward to use in a system design analysis. It is the worst-case representation of the optical fiber’s contribution to system PMD.

>> What should I ask of my fiber cable supplier in terms of a PMD specification?

We recommend that the customer insist on data from the cable manufacturer that establishes a correlation between fiber PMD and cable PMD and that provides a cabled fiber LDV value to meet the applications’ needs.
Since PMD may have some dependencies on cable construction also, we suggest PMD specifications that most closely represent the cable type of interest.
Quality assurance programs must also be in place to ensure that data reported in the specifications is repeatable and therefore meaningful.

>> Is the sensitivity to PMD fiber-design dependent or manufacturer dependent?

Much of the sensitivity of a fiber to PMD is due to the manufacturing process. There is some variation from fiber type to fiber type due to design, but low PMD is generally achieved by quality and process control during fiber manufacturing.

>> How do I test an incoming cable or fiber for PMD?

It is difficult to check PMD before installation. Fiber measurements on spool are notoriously unreliable. Loose coils (greater than 30 cm diameter) are one approach, though many fiber samples need to be measured before one can assess the PMD.
Cable measurements present the same difficulties. Short cable lengths (less than 10 km) can be measured on-reel, but again, many measurements are required.
The cable manufacturer should be able to give an estimate of the change in PMD expected when the cable is taken off the reel. It is suggested that the test set be capable of measuring a PMD value below 0.01 ps.

>> Why is the minimum measurable PMD length dependent?

The “PMD coefficient”, with units of ps/sqrt(km), indicates the rate at which PMD builds up along the fiber length.
For a fixed length, a “PMD value”, with units of ps, can be measured. Test sets have a minimum measurable PMD value. For fiber with a low PMD coefficient, a long fiber length is required for accurate measurement.
For example, if a 0.02 ps/sqrt(km) fiber is measured using an interferometric test set with a minimum measurable PMD of 0.1 ps, one would require 25 km of fiber.
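
The 25 km figure comes from inverting the PMD accumulation relation (PMD value = coefficient x sqrt(length)); a quick check:

  # Sketch only: minimum fiber length needed so the accumulated PMD reaches the test set's floor.
  min_measurable_pmd_ps = 0.1     # instrument floor quoted above
  pmd_coefficient = 0.02          # ps/sqrt(km), low-PMD fiber
  min_length_km = (min_measurable_pmd_ps / pmd_coefficient) ** 2
  print(round(min_length_km), "km")   # -> 25 km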

>> What are the limitations to measurement of low PMD?

It was shown some time ago that all PMD measurement methods have an associated uncertainty that depends on the bandwidth of the test set and the size of the PMD value being measured.
This uncertainty, U, takes approximately the mathematical form:

  U ≈ 1 / sqrt(Δω × PMD)

where Δω is the instrument bandwidth (expressed as an angular optical frequency range) and “PMD” is the correct PMD value.
For the low values of PMD in fiber, this uncertainty is substantial, even for very wide instrument bandwidths (> 100 nm at 1550nm). A simple calculation shows that with a 120 nm bandwidth (typical of the Jones Matrix Eigenanalysis test set, the IEC referenced standard and most widely accepted test method) a measurement of a 0.1 ps PMD value (corresponding to 25 km of good fiber mentioned earlier) will result in a 33% uncertainty.
This uncertainty can only be overcome by varying something other than wavelength when the measurement is made (other choices are temperature or fiber position).
In any case, more than one measurement is needed if you want to measure low values with small uncertainty.
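
A rough numerical check of the 33% figure, assuming the relative uncertainty scales as 1/sqrt(Δω × PMD) (the exact prefactor depends on the measurement method):

  # Sketch only: relative uncertainty of a PMD measurement versus instrument bandwidth.
  import math

  c = 3.0e8                                   # speed of light, m/s
  wavelength = 1550e-9                        # centre wavelength, m
  bandwidth_nm = 120                          # instrument bandwidth, nm
  pmd_s = 0.1e-12                             # PMD value being measured, s

  delta_omega = 2 * math.pi * c * (bandwidth_nm * 1e-9) / wavelength ** 2   # rad/s
  uncertainty = 1 / math.sqrt(delta_omega * pmd_s)
  print(round(uncertainty * 100), "%")        # -> roughly 33 %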

>> What is a “low mode coupled” PMD measurement?

Realizing that the PMD measurement of fiber on the spool or cable on the reel is very inaccurate, thought has been given to a more representative configuration that will reflect the performance of installed cable.
Current standards (IEC 60793-1-48) suggest using a large diameter drum, loose coil, or flat surface for fiber or cable layout. We have found that the most reliable method is using the flat surface.
For fibers that have little mode coupling (such as unspun fiber), all configurations involving bending dramatically increase mode coupling and produce an erroneously low PMD value.
On the other hand, fibers that are spun and highly mode coupled, tend to show higher PMD when coiled. In either case, correct answers are obtained using the flat surface.
The low mode coupled measurements have the disadvantage that only short lengths of fiber can be used. In this case, the instrument bandwidth limitations become important. For this reason, all low mode coupled measurements require some method of enhanced sampling, such as variation of fiber temperature or position in order to actually measure PMD.

>> Is it possible to measure PMD with an OTDR?

No, a traditional optical time domain reflectometer (OTDR) only measures loss and optical reflections. However, there are newly available polarization-OTDRs (POTDR) that can isolate and measure high levels of PMD along the length of a route.

>> I am not running high bit rates, like 10 or 40 Gbps, now, should I be concerned about PMD?

The timeframe for getting to 10 Gbps data rates or beyond and the length of time the fiber is expected to last determine the need for concern over high PMD.
Once fiber is purchased and installed, compensation, if needed, can prove costly, complex or limiting. Many network builds are intended to be in service for well over twenty years.
Over the last ten years, data rates have increased a hundred fold as personal computers have increased in processing speed. Looking forward, as Moore’s law driven improvements in microelectronics speed continue and optical transmission component costs decrease, transmission rates of 40 Gbps and beyond will be reasonable expectations and PMD will be important.

>> Is there any correlation between chromatic dispersion and PMD?

No, chromatic dispersion is a result of materials used in the core of the optical fiber as well as the waveguide design. PMD is a result of imperfections in the circular uniformity of the core as a result of a host of internal or external effects such as eccentricity, ovality, core/cladding defects, external bends or pressure, etc.

>> Should I worry about PMD in a multimode fiber?

In general, multimode fiber is only used on very short lengths (< 1km) and at lower data rates (< 10 Gbps) such that PMD for these types of optical fibers is not of concern.

Source: http://www.fiberoptics4sale.com/wordpress/what-is-wavelength-selective-switchwss/

1. What Is a Wavelength Selective Switch (WSS)?

JDSU Wavelength Selective Switch WSS
WSS stands for Wavelength Selective Switch. The WSS has become the heart of the modern reconfigurable DWDM Agile Optical Network (AON).
WSS can dynamically route, block and attenuate all DWDM wavelengths within a network node. The following figure shows WSS’s functionality.
Wavelength Selective Switch (WSS) Functionality
The above figure shows that a WSS consists of a single common optical port and N opposing multi-wavelength ports where each DWDM wavelength input from the common port can be switched (routed) to any one of the N multi-wavelength ports, independent of how all other wavelength channels are routed.
This wavelength switching (routing) process can be dynamically changed through an electronic communication control interface on the WSS. So, in essence, a WSS switches DWDM channels or wavelengths.
There is also a per-wavelength variable attenuation mechanism in the WSS, so each wavelength can be independently attenuated for channel power control and equalization.
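
Functionally, a WSS can be thought of as a per-wavelength routing and attenuation table. A minimal sketch (hypothetical channel names and port numbers, not a vendor API):

  # Sketch only: a 1xN WSS modelled as per-channel (output port, attenuation) settings.
  wss = {}                                    # channel label -> (port, attenuation in dB)

  def route(channel, port, attenuation_db=0.0):
      wss[channel] = (port, attenuation_db)   # each channel is switched independently

  route("C21", port=1)                        # express this wavelength to port 1
  route("C22", port=3, attenuation_db=2.5)    # drop this one on port 3, padded by 2.5 dB
  route("C23", port=None)                     # block the channel entirely
  print(wss)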

2. How Does a WSS Work?

There are several WSS switching engine technologies on the market today; here we will demonstrate a MEMS-based design. The different switching technologies will be discussed in the next section.
A) 1X2 Configuration
The following figure shows a diffraction grating and MEMS based 1×2 wavelength selective switch.
Diffraction Grating Based Wavelength Selective Switch WSS
The light from a fiber is collimated by a lens with focal length f and demultiplexed by diffraction off the grating.
The direction of the beam after the grating will depend on the wavelength λ0 of the beam. The diffracted beams then pass through the lens for a second time, and the spectrally resolved light is focused on the reflective linear MEMS device, which is also referred to as a 1D (one-dimensional) MEMS device. The MEMS device then either attenuates the beam or changes its direction.
The reflected light passes through the lens and is wavelength-multiplexed by diffraction off the grating, and finally the lens couples the light back into the fiber.  The output light is separated from the input light by a circulator.
B) 1XN Wavelength Selective Switch
The 1XN switch can be considered as a generalization of the 1×2 switch. Because every wavelength in the 1XN switch can be switched to any one of the N output ports, this switch can be used in a fully flexible OADM (Optical Add Drop Multiplexer) with multiple add/drop fiber ports, each of which carries single or multiple wavelengths.
1XN switches can be cascaded to form larger architectures, and an NxN wavelength-selective matrix can be built by interconnecting back-to-back 1XN switches.
Let’s look at the optical design of the 1XN wavelength selective switch (WSS).
Optical Design of 1xN Wavelength Selective Swtich WSS
The 1xN switch design uses an additional lens in a Fourier-transform configuration to perform a space-to-angle conversion in the first stage of the switch. The 1xN switch also requires tilt mirrors with N different tilt angles; these are usually implemented as analog mirrors.
Here is how the design works.
  1. The common input fiber enters the switch at point A where light is collimated by a microlens.
  2. The following lens images the collimated beam onto the diffraction grating at point C.
  3. The wavelength-dispersed beams then fall onto the MEMS device plane D.
  4. On MEMS device plane D, the beams are reflected with a certain tilt angle depending on the micromirrors’ settings.
  5. All reflected beams are focused on point B again, where the angle-to-space conversion section images the beam onto the output fiber. Each output corresponds to a specific tilt angle of the micromirrors.
This MEMS-based switch can switch as many as 128 wavelengths with 50 GHz spacing. The total insertion loss is less than 6 dB. It uses a 100 mm focal-length mirror and a 1100 lines/mm grating. The micromirrors can be actuated by +/- 8° using a voltage of less than 115 V, and the switch can be used as a variable attenuator by detuning the tilt angle of the micromirrors.
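
To get a feel for the optical layout, the spacing of adjacent 50 GHz channels on the MEMS plane can be estimated from the grating dispersion and the focal length quoted above (assuming a near-Littrow geometry; the real design will differ in detail):

  # Sketch only: estimate the channel-to-channel spot spacing on the MEMS plane.
  import math

  c = 3.0e8                       # m/s
  wavelength = 1550e-9            # m
  focal_length = 0.1              # m (100 mm focusing optic)
  groove_pitch = 1e-3 / 1100      # m, 1100 lines/mm grating
  channel_spacing_hz = 50e9

  # First-order, near-Littrow diffraction angle: 2*sin(theta) = wavelength / pitch
  theta = math.asin(wavelength / (2 * groove_pitch))
  angular_dispersion = 1 / (groove_pitch * math.cos(theta))        # rad per metre of wavelength
  delta_lambda = wavelength ** 2 * channel_spacing_hz / c          # 50 GHz is ~0.4 nm at 1550 nm
  spot_spacing = focal_length * angular_dispersion * delta_lambda  # small-angle approximation
  print(round(spot_spacing * 1e6), "um between adjacent channel spots")   # -> roughly 80-90 um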

3. WSS Switching Engine Technologies

The optical design we discussed in the previous section is based on MEMS micromirrors. Here we will discuss several more switching engine technologies.
A. MEMS Switching Engine
The micromirror array is fabricated in silicon, using wafer-scale lithographic processes leveraged from the semiconductor industry.
MEMS based WSS Switch
When a voltage is applied to an electrode, the mirror tilts due to electrostatic attraction. Attenuation is provided by tilting the mirror to offset the beam slightly at the output fiber. An angle-to-offset lens converts beam tilt into beam displacement at the input/output fiber array.
MEMS based WSS Switch Working Principle
The advantage is that an offset perpendicular to the wavelength-dispersion direction gives attenuation with no change in channel shape, as shown below.
MEMS based WSS gives attenuation with no change in shape
The tradeoffs between mechanical resonance frequency (hinge stiffness), driving voltage, and tilt angle often result in high driving voltages.
B. Liquid Crystal (LC) Switching Engine Principle
A liquid crystal cell selectively controls the polarization state of the transmitted light by application of a control voltage, as shown below.
Liquid Crystal based Wavelength Selective Switch Working Principle
For the switching process to work, the liquid crystal (LC) cell must be followed by a polarization-dependent optical element such as a PBS (Polarization Beam Splitter) to change the path of the transmitted light based on its polarization.
Liquid Crystal LC based Wavelength Selective Switch Working Principle
Randomly polarized input must be separated into two orthogonal polarizations.
In a binary switching configuration, N liquid crystal (LC) cells can select among 2^N output ports. An extra liquid crystal (LC) cell and polarizer can be used to provide attenuation.
C. LCoS (Liquid Crystal on Silicon)
The following two graphics show Liquid Crystal on Silicon (LCoS) technology and the optical design of a wavelength selective switch based on LCoS switching technology.
An LCoS-based switch engine uses an array of phase-controlled pixels to implement beam steering by creating a linear optical phase retardation in the direction of the intended deflection.
Illustration of a typical LCoS pixelated phase steering array
LCOS is a display technology which combines Liquid Crystal and semiconductor technologies to create a high resolution, solid-state display engine. In WSS design, the LCoS is used to control the phase of light at each pixel to produce an electrically-programmable grating. This can control the beam deflection in a vertical direction by varying either the pitch or blaze of the grating whilst the width of the channel is determined by the number of pixel columns selected in the horizontal direction.
The WSS design incorporates polarization diversity, control of mode size, and 4-f optical imaging of each wavelength in the dispersive axis of the LCoS, providing integrated switching and optical power control.
In operation, the light passes from a fiber array through the polarization diversity optics which both separates and aligns the orthogonal polarization states to be in the high efficiency s-polarization state of the diffraction grating. The light from the input fiber is reflected from the imaging mirror and then angularly dispersed by the grating, reflecting the light back to the cylindrical mirror which directs each optical frequency (wavelength) to a different portion of the LCoS. The path for each wavelength is then retraced upon reflection from the LCoS, with the beam-steering image applied on the LCoS directing the light to a particular port of the fiber array.
Finisar WaveShaper Optical Schematic
Schematic of LCoS Structure

4. Switch Engine Technologies and Minimum Achievable Spot Sizes

Some switching engine technologies require a minimum beam size to function properly, and thus place a limit on the minimum optical system length for a given channel passband width, channel spacing, and dispersive element.
The advantages of a small optical system size include a smaller overall module footprint, greater functional density, lower packing cost, and greater tolerance to mechanical shock and environmental conditions.
The following figure shows the minimum spot size for various switch engine technologies.
the minimum spot size for various switch engine technologies

Patch Cords

The buffer or jacket on patchcords is often color-coded to indicate the type of fiber used. The strain relief “boot” that protects the fiber from bending at a connector is color-coded to indicate the type of connection. Connectors with a plastic shell (such as SC connectors) typically use a color-coded shell. Standard color codings for jackets and boots (or connector shells) are shown below:
Buffer/jacket color – Meaning
Yellow – single-mode optical fiber
Orange – multi-mode optical fiber
Aqua – 10 gig laser-optimized 50/125 micrometer multi-mode optical fiber
Grey – outdated color code for multi-mode optical fiber
Blue – sometimes used to designate polarization-maintaining optical fiber

Connector boot – Meaning – Comment
Blue – Physical Contact (PC), 0° – mostly used for single-mode fibers; some manufacturers use this for polarization-maintaining optical fiber
Green – Angle Polished (APC), 8°
Black – Physical Contact (PC), 0°
Grey, Beige – Physical Contact (PC), 0° – multimode fiber connectors
White – Physical Contact (PC), 0°
Red – High optical power – sometimes used to connect external pump lasers or Raman pumps

Fiber Optics

Understanding the Basics
Nothing has changed the world of communications as much as the development and implementation of optical fiber. This article provides the basic principles needed to work with this technology.

Optical fibers are made from either glass or plastic. Most are roughly the diameter of a human hair, and they may be many miles long. Light is transmitted along the center of the fiber from one end to the other and a signal may be imposed. Fiber optic systems are superior to metallic conductors in many applications. Their greatest advantage is bandwidth. Because of the wavelength of light, it is possible to transmit a signal which contains considerably more information than is possible with a metallic conductor — even a coaxial conductor. Other advantages include:

• Electrical Isolation — Fiber optics do not need a grounding connection. Both the transmitter and the receiver are isolated from each other and are therefore free of ground loop problems. Also, there is no danger of sparks or electrical shock.

• Freedom from EMI — Fiber optics are immune to electromagnetic interference (EMI), and they emit no radiation themselves to cause other interference.

• Low Power Loss — This permits longer cable runs and fewer repeater amplifiers.

• Lighter and Smaller — Fiber weighs less and needs less space than metallic conductors with equivalent signal-carrying capacity.

Copper wire is about 13 times heavier. Fiber also is easier to install and requires less duct space.

Applications 

Some of the major application areas of optical fibers are:

• Communications — Voice, data and video transmission are the most common uses of fiber optics, and these include:

– Telecommunications
– Local area networks (LANs)
– Industrial control systems
– Avionic systems
– Military command, control and communications systems

• Sensing — Fiber optics can be used to deliver light from a remote source to a detector to obtain pressure, temperature or spectral information. The fiber also can be used directly as a transducer to measure a number of environmental effects, such as strain, pressure, electrical resistance and pH. Environmental changes affect the light intensity, phase and/or polarization in ways that can be detected at the other end of the fiber.

• Power Delivery — Optical fibers can deliver remarkably high levels of power for such tasks as laser cutting, welding, marking and drilling.

• Illumination — A bundle of fibers gathered together with a light source at one end can illuminate areas that are difficult to reach — for example, inside the human body, in conjunction with an endoscope. Also, they can be used as a display sign or simply as decorative illumination.

Construction 

An optical fiber consists of three basic concentric elements: the core, the cladding and the outer coating (Figure 1).



Figure 1. An optical fiber consists of a core, cladding and coating.


The core is usually made of glass or plastic, although other materials are sometimes used depending on transmission spectrum desired.

The core is the light transmitting portion of the fiber. The cladding usually is made of the same material as the core, but with a slightly lower index of refraction (usually about one percent lower). This index difference causes total internal reflection to occur at the index boundary along the length of the fiber so that the light is transmitted down the fiber and does not escape through the side walls.

The coating usually comprises one or more coats of a plastic material to protect the fiber from the physical environment. Sometimes metallic sheaths are added to the coating for further physical protection.

Optical fibers usually are specified by their size, given as the outer diameter of the core, cladding and coating. For example, a 62.5/125/250 would refer to a fiber with a 62.5-µm diameter core, a 125-µm diameter cladding and a 0.25-mm outer coating diameter.

Principles

Optical materials are characterized by their index of refraction, referred to as n. A material’s index of refraction is the ratio of the speed of light in a vacuum to the speed of light in the material. When a beam of light passes from one material to another with a different index of refraction, the beam is bent (or refracted) at the interface (Figure 2).



Figure 2. A beam of light passing from one material to another of a different index of refraction is bent or refracted at the interface.


Refraction is described by Snell’s law:

  n_I sin(θ_I) = n_R sin(θ_R)

where n_I and n_R are the indices of refraction of the materials through which the beam is refracted and θ_I and θ_R are the angles of incidence and refraction of the beam. If the angle of incidence is greater than the critical angle for the interface (typically about 82° for optical fibers), the light is reflected back into the incident medium without loss by a process known as total internal reflection (Figure 3).
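
For a typical fiber with roughly a one percent core-cladding index difference, the critical angle quoted above can be reproduced directly from Snell’s law (illustrative index values assumed):

  # Sketch only: critical angle for total internal reflection at the core/cladding boundary.
  import math

  n_core = 1.480                 # illustrative core index
  n_clad = n_core * 0.99         # cladding about one percent lower
  critical_angle = math.degrees(math.asin(n_clad / n_core))
  print(round(critical_angle, 1), "degrees")   # -> about 81.9 degrees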



Figure 3. Total internal reflection allows light to remain inside the core of the fiber.


Modes

When light is guided down a fiber (as microwaves are guided down a waveguide), phase shifts occur at every reflective boundary. There is a finite discrete number of paths down the optical fiber (known as modes) that produce constructive (in-phase and therefore additive) phase shifts that reinforce the transmission. Because each of these modes occurs at a different angle to the fiber axis as the beam travels along the length, each one travels a different length through the fiber from the input to the output. Only one mode, the zero-order mode, travels the length of the fiber without reflections from the sidewalls. A fiber that carries only this mode is known as a single-mode fiber. The actual number of modes that can be propagated in a given optical fiber is determined by the wavelength of light and the diameter and index of refraction of the core of the fiber.

Attenuation

Signals lose strength as they are propagated through the fiber: this is known as beam attenuation. Attenuation is measured in decibels (dB) with the relation:

  attenuation (dB) = 10 × log10(P_in / P_out)

where Pin and Pout refer to the optical power going into and coming out of the fiber. The table below shows the power typically lost in a fiber for several values of attenuation in decibels.

(Table: typical fraction of power lost for several values of attenuation in decibels.)

The attenuation of an optical fiber is wavelength dependent. At the extremes of the transmission curve, multiphoton absorption predominates. Attenuation is usually expressed in dB/km at a specific wavelength. Typical values range from 10 dB/km for step-index fibers at 850 nm to a few tenths of a dB/km for single-mode fibers at 1550 nm.
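
Since decibels are logarithmic, the fraction of power lost for a given attenuation follows directly from the definition; a few representative values, computed rather than tabulated:

  # Sketch only: fraction of optical power lost for a given attenuation in dB.
  for loss_db in (0.5, 1, 3, 10, 20):
      fraction_lost = 1 - 10 ** (-loss_db / 10)
      print(loss_db, "dB ->", round(fraction_lost * 100, 1), "% of the power lost")
  # 3 dB corresponds to about half the power, 10 dB to 90 %, 20 dB to 99 %.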

There are several causes of attenuation in an optical fiber:

• Rayleigh Scattering — Microscopic-scale variations in the index of refraction of the core material can cause considerable scatter in the beam, leading to substantial losses of optical power. Rayleigh scattering is wavelength dependent and is less significant at longer wavelengths. This is the most important loss mechanism in modern optical fibers, generally accounting for up to 90 percent of any loss that is experienced.

• Absorption — Current manufacturing methods have reduced absorption caused by impurities (most notably water in the fiber) to very low levels. Within the bandpass of transmission of the fiber, absorption losses are insignificant.



Figure 4. Numerical aperture depends on the angle at which rays enter the fiber and the diameter of the fiber’s core.


• Bending — Manufacturing methods can produce minute bends in the fiber geometry. Sometimes these bends will be great enough to cause the light within the core to hit the core/cladding interface at less than the critical angle so that light is lost into the cladding material. This also can occur when the fiber is bent in a tight radius (less than, say, a few centimeters). Bend sensitivity is usually expressed in terms of dB/km loss for a particular bend radius and wavelength.

Numerical aperture

Numerical aperture (NA), shown in Figure 4, is a measure of the maximum angle at which light rays will enter and be conducted down the fiber. This is represented by the following equation:

  NA = sin(θ_max) = sqrt(n_core² − n_clad²)
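
With the same illustrative indices used earlier (a core index of 1.480 and a cladding about one percent lower), the numerical aperture and acceptance angle work out as follows:

  # Sketch only: numerical aperture and acceptance half-angle from the core and cladding indices.
  import math

  n_core = 1.480
  n_clad = n_core * 0.99
  na = math.sqrt(n_core ** 2 - n_clad ** 2)
  acceptance_half_angle = math.degrees(math.asin(na))   # in air (n = 1)
  print(round(na, 3), round(acceptance_half_angle, 1))  # -> about 0.21 and 12 degrees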

Dispersion 

As the optical pulses travel the length of the fiber, they are broadened or lengthened in time. This is called dispersion. Because the pulses eventually will become so out of step that they begin to overlap each other and corrupt the data, dispersion sets an upper limit on the data-carrying capabilities of a fiber. There are three principal causes for this broadening:

• Chromatic Dispersion — Different wavelengths travel at different velocities down the fiber. Because typical light sources provide power over a series or range of wavelengths, rather than from a single discrete spectral line, the pulses must spread out along the length of the fiber as they proceed. The high-speed lasers used in communications have very narrow spectral output specifications, greatly reducing the effect of chromatic dispersion.

• Modal Dispersion — Different fiber modes reflect at different angles as they proceed down the fiber. Because each modal angle produces a somewhat different path length for the beam, the higher order modes reach the output end of the fiber behind the lower order modes.

• Waveguide Dispersion — This minor cause of dispersion is due to the geometry of the fiber and results in different propagation velocities for each of the modes.

Bandwidth

Bandwidth measures the data-carrying capacity of an optical fiber and is expressed as the product of the data frequency and the distance traveled (MHz-km or GHz-km typically). For example, a fiber with a 400-MHz-km bandwidth can transmit 400 MHz for a distance of 1 km or it can transmit 20 MHz of data for 20 km. The primary limit on bandwidth is pulse broadening, which results from modal and chromatic dispersion of the fiber. Typical values for different types of fiber follow:

(Table: typical bandwidth values for different fiber types.)

Power transmission

The amount of power that a fiber can transmit (without being damaged) is usually expressed in terms of the maximum acceptable power density. Power density is the maximum power output of the laser divided by the area of the focused laser spot. For example, a 15-W laser beam focused onto a 150-µm diameter spot produces a power density of

  15 W / (π × (75 µm)²) ≈ 85 kW/cm²

The output of a pulsed laser (typically specified in millijoules of energy per pulse) must first be converted to power per pulse. For example, a pulsed laser that produces 50 mJ in a 10-ns pulse provides an output power of

  50 mJ / 10 ns = 5 × 10⁶ W = 5 MW

The power density then can be calculated from the spot size.
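
Both worked examples above are simple arithmetic and can be checked quickly (illustrative only):

  # Sketch only: power density of a CW beam and peak power of a pulsed laser.
  import math

  # 15 W focused onto a 150-um-diameter spot
  spot_radius_cm = 75e-4                         # 75 um expressed in cm
  power_density = 15 / (math.pi * spot_radius_cm ** 2)
  print(round(power_density), "W/cm^2")          # -> roughly 85 000 W/cm^2

  # 50 mJ delivered in a 10 ns pulse
  peak_power = 50e-3 / 10e-9
  print(peak_power / 1e6, "MW per pulse")        # -> 5.0 MW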

To transmit the absolute maximum energy levels down a fiber, the fiber end faces must be absolutely smooth, polished, and perpendicular to the fiber axis and the light beam. Also, the beam diameter should be no greater than approximately one-half of the core diameter. If the beam is not appropriately focused, some of the energy may spill into the cladding, which quickly can damage polymer-clad silica fibers. For this reason it is better to use silica-clad silica fibers in higher power density applications.

Fiber types

There are basically three types of optical fiber: single-mode, multimode graded-index and multimode step-index. They are characterized by the way light travels down the fiber and depend on both the wavelength of the light and the mechanical geometry of the fiber. Examples of how they propagate light are shown in Figure 5.

Single-mode

Only the fundamental zero-order mode is transmitted in a single-mode fiber. The light beam travels straight through the fiber with no reflections from the core-cladding sidewalls at all. Single-mode fiber is characterized by the wavelength cut-off value, which is dependent on core diameter, NA and wavelength of operation. Below the cut-off wavelength, higher order modes may also propagate, which changes the fiber’s characteristics.



Figure 5. Modes of fiber transmission.


Because the single-mode fiber propagates only the fundamental mode, modal dispersion (the primary cause of pulse overlap) is eliminated. Thus, the bandwidth is much higher with a single-mode fiber than that of a multimode fiber. This simply means that pulses can be transmitted much closer together in time without overlap. Because of this higher bandwidth, single-mode fibers are used in all modern long-range communication systems. Typical core diameters are between 5 and 10 µm.

The actual number of modes that can be propagated through a fiber depends on the core diameter, the numerical aperture, and the wavelength of the light being transmitted. These may be combined into the normalized frequency parameter or V number,

  V = (2π × a / λ) × NA = (2π × a / λ) × sqrt(n1² − n2²)

where a is the core radius, λ is the wavelength, and n1 and n2 are the indices of refraction of the core and the cladding. The condition for single-mode operation is that

  V < 2.405

Perhaps more important and useful is the cut-off wavelength. This is the wavelength below which the fiber will allow propagation of multiple modes and can be expressed as:

  λc = 2π × a × NA / 2.405

A fiber is typically chosen with a cut-off wavelength slightly below the desired operating wavelength. For lasers typically used as sources (with output wavelengths between 850 and 1550 nm) the core diameter of a single-mode fiber is in the range of 3 to 10 µm.
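
As an illustration (values chosen to be representative of a standard single-mode fiber, not taken from any datasheet), the V number and cut-off wavelength can be computed as:

  # Sketch only: V number and cut-off wavelength of a step-index fiber.
  import math

  core_radius = 4.1e-6        # m, illustrative single-mode core radius
  na = 0.12                   # illustrative numerical aperture
  wavelength = 1550e-9        # m, operating wavelength

  v_number = 2 * math.pi * core_radius * na / wavelength
  cutoff_wavelength = 2 * math.pi * core_radius * na / 2.405
  print(round(v_number, 2))                       # -> about 2.0, below 2.405, so single-mode
  print(round(cutoff_wavelength * 1e9), "nm")     # -> cut-off around 1285 nm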

Multimode graded-index

The core diameters of multimode fibers are much larger than single-mode fibers. As a result, higher order modes also are propagated.

The core in a graded-index fiber has an index of refraction that radially decreases continuously from the center to the cladding interface. As a result, the light travels faster at the edge of the core than in the center. Different modes travel in curved paths with nearly equal travel times. This greatly reduces modal dispersion in the fiber.

As a result, graded-index fibers have bandwidths which are significantly greater than step-index fibers, but still much lower than single-mode fibers. Typical core diameters of graded-index fibers are 50, 62.5 and 100 µm. The main application for graded-index fibers is in medium-range communications, such as local area networks.

Multimode step-index

The core of a step-index fiber has a uniform index of refraction right up to the cladding interface where the index changes in a step-like fashion. Because different modes in a step-index fiber travel different path lengths in their journey through the fiber, data transmission distances must be kept short to avoid considerable modal dispersion problems.

Step-index fibers are available with core diameters of 100 to 1500 µm. They are well suited to applications requiring high power densities, such as medical and industrial laser power delivery.

OSI 7 LAYER MODEL
The OSI, or Open System Interconnection, model defines a networking framework for implementing protocols in seven layers. Control is passed from one layer to the next, starting at the application layer in one station, proceeding to the bottom layer, over the channel to the next station and back up the hierarchy.
Easy Way to Remember the OSI 7 Layer Model

Application(Layer 7) This layer supports application and end-user processes. Communication partners are identified, quality of service is identified, user authentication and privacy are considered, and any constraints on data syntax are identified. Everything at this layer is application-specific. This layer provides application services for file transfers, e-mail, and other network software services.

Presentation(Layer 6) This layer provides independence from differences in data representation (e.g., encryption) by translating from application to network format, and vice versa. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

Session(Layer 5) This layer establishes, manages and terminates connections between applications. The session layer sets up, coordinates, and terminates conversations, exchanges, and dialogues between the applications at each end. It deals with session and connection coordination.

Transport(Layer 4) This layer provides transparent transfer of data between end systems, or hosts, and is responsible for end-to-end error recovery and flow control. It ensures complete data transfer.

Network(Layer 3) This layer provides switching and routing technologies, creating logical paths, known as virtual circuits, for transmitting data from node to node. Routing and forwarding are functions of this layer, as well as addressing, internetworking, error handling, congestion control and packet sequencing.

Data Link(Layer 2) At this layer, data packets are encoded and decoded into bits. It furnishes transmission protocol knowledge and management and handles errors in the physical layer, flow control and frame synchronization. The data link layer is divided into two sublayers: The Media Access Control (MAC) layer and the Logical Link Control (LLC) layer. The MAC sublayer controls how a computer on the network gains access to the data and permission to transmit it. The LLC layer controls frame synchronization, flow control and error checking.

Physical(Layer 1) This layer conveys the bit stream – electrical impulse, light or radio signal — through the network at the electrical and mechanical level. It provides the hardware means of sending and receiving data on a carrier, including defining cables, cards and physical aspects.


OSI Layer Model for concentrators

Hubs/Repeaters are found in the Physical Layer

Switches/Bridges/Wireless Access Points are found in the Data Link Layer

Routers are found in the Network Layer

Gateways are found in all 7 of the OSI Layers

Brouters are found in both the Data Link and Network Layers

OSI 7 Layer Model
7. Application Layer – DHCP, DNS, FTP, HTTP, IMAP4, NNTP, POP3, SMTP, SNMP, SSH, TELNET and NTP
6. Presentation Layer – SSL, WEP, WPA, Kerberos
5. Session Layer – logical ports 21, 22, 23, 80, etc.
4. Transport Layer – TCP, SPX and UDP
3. Network Layer – IPv4, IPv6, IPX, OSPF, ICMP, IGMP and ARP
2. Data Link Layer – 802.11 (WLAN), Wi-Fi, WiMAX, ATM, Ethernet, Token Ring, Frame Relay, PPTP, L2TP and ISDN
1. Physical Layer – Hubs, Repeaters, Cables, Optical Fiber, SONET/SDH, Coaxial Cable, Twisted Pair Cable and Connectors

 

Unix File Names

It is important to understand the rules for creating Unix files:
  • Unix is case sensitive! For example, “fileName” is different from “filename”.
  • It is recommended that you limit names to the alphabetic characters, numbers, underscore (_), and dot (.). Dots (.) used in Unix filenames are simply characters and not delimiters between filename components; you may include more than one dot in a filename. Including a dot as the first character of a filename makes the file invisible (hidden) to the normal ls command; use the -a flag of the ls command to display hidden files.
  • Although many systems will allow more, a safe length is 14 characters per file name.
Unix shells typically include several important wildcard characters. The asterisk (*) is used to match 0 or more characters (e.g., abc* will match any file beginning with the letters abc), the question mark (?) is used to match any single character, and the left ([) and right (]) square brackets are used to enclose a string of characters, any one of which is to match. Execute the following commands and observe the results:
  ls m*
  ls *.f
  ls *.?
  ls [a-d]*
Notes for PC users: Unix uses forward slashes ( / ) instead of backslashes ( \ ) for directories

Looking at the Contents of Files

You can examine the contents of files using a variety of commands. cat, more, pg, head, and tail are described here. Of course, you can always use an editor; to use vi in “read-only” mode to examine the contents of the file “argtest”, enter:
  vi  -R   argtest
You can now use the standard vi commands to move through the file; however, you will not be able to make any changes to the contents of the file. This option is useful when you simply want to look at a file and want to guarantee that you make no changes while doing so.
Use the vi “:q” command to exit from the file.

cat Command

cat is a utility used to conCATenate files. Thus it can be used to join files together, but it is perhaps more commonly used to display the contents of a file on the screen.
Observe the output produced by each of the following commands:
  cd;    cd  xmp
  cat        cars
  cat  -vet  cars
  cat  -n    cars
The semicolon (;) in the first line of this example is a command separator which enables entry of more than one command on a line. When the <Return> key is pressed following this line, the command cd is issued which changes to your home directory. Then the command “cd xmp” is issued to change into the subdirectory “xmp.” Entering this line is equivalent to having entered these commands sequentially on separate lines. These two commands are included in the example to guarantee that you are in the subdirectory containing “cars” and the other example files. You need not enter these commands if you are already in the “xmp” directory created when you copied the example files (see Sample Files if you have not already copied these files).
The “-vet” options enable display of tab, end-of-line, and other non-printable characters within a file; the “-n” option numbers each line as it is displayed.
You can also use the cat command to join files together:
  cat  page1
  cat  page2
  cat  page1  page2 > document
  cat  document
Note: If the file “document” had previously existed, it will be replaced by the contents of files “page1” and “page2”.

Cautions in using the cat command

The cat command should only be used with “text” files; it should not be used to display the contents of binary files (e.g., compiled C or FORTRAN programs). Unpredictable results may occur, including the termination of your logon session, when the cat command is used on binary files. Use the command “file *” to display the characteristics of files within a directory prior to using the cat command with any unknown file. You can use the od (enter “man od” for details on use of Octal Dump) command to display the contents of non-text files. For example, to display the contents of “a.out” in both hexadecimal and character representation, enter:
  od  -xc  a.out
Warning! cat (and other Unix commands) can destroy files if not used correctly. For example, as illustrated in the Sobell book, the cat (also cp and mv) command can overwrite and thus destroy files. Observe the results of the following command:
  cat  letter page1 >  letter

Typically Unix does not return a message when a command executes successfully. Here the Unix operating system will attempt to complete the requested command by first initializing the file “letter” and then writing the current contents of “letter” (now nothing) and “page1” into this file. Since “letter” has been reinitialized and is also named as a source file, an error diagnostic is generated. Part of the Unix philosophy is “No news is good news”. Thus the appearance of a message is a warning that the command was not completed successfully.

Now use the “cat” command to individually examine the contents of the files “letter” and “page1”. Observe that the file “letter” does not contain the original contents of the files “letter” and “page1” as was intended.
Use the following command to restore the original file “letter”:
  cp  ~aixstu00/xmp/letter  .

more Command

You may type or browse files using the more command. The “more” command is useful when examining a large file as it displays the file contents one page at a time, allowing each page to be examined at will. As with the man command, you must press the space bar to proceed to the next screen of the file. On many systems, pressing the <b> key will enable you to page backwards in the file. To terminate more at any time, press <q>.
To examine a file with the more command, simply enter:
  more  file_name

See the online manual pages for additional information.

The man command uses the more command to display the manual pages; thus the commands you are familiar with in using man will also work with more.
Not all Unix systems include the more command; some implement the pg command instead. VTAIX includes both the more and pg commands. When using the pg command, press <Return> to page down through a file instead of using the space bar.
Observe the results of entering the following commands:
  more  argtest
  pg    argtest

head Command

The head command is used to display the first few lines of a file. This command can be useful when you wish to look for specific information which would be found at the beginning of a file. For example, enter:
  head  argtest

tail Command

The tail command is used to display the last lines of a file. This command can be useful to monitor the status of a program which appends output to the end of a file. For example, enter:
  tail  argtest

Copying, Erasing, Renaming

Warning! The typical Unix operating system provides no ‘unerase’ or ‘undelete’ command. If you mistakenly delete a file you are dependent upon the backups you or the system administrator has maintained in order to recover the file. You need to be careful when using commands like copy and move which may result in overwriting existing files. If you are using the C or Korn Shell, you can create a command alias which will prompt you for verification before overwriting files with these commands.

Copying Files

The cp command is used to copy a file or group of files. You have already seen an example application of the cp command when you copied the sample files to your userid (see Sample Files). Now let’s make a copy of one of these files. Recall that you can obtain a listing of the files in the current directory using the ls command. Observe the results of the following commands:
  ls  l*
  cp  letter  letter.2
  ls  l*
Note: Unlike many other operating systems, such as PC/DOS, you must specify the target with the copy command; it does not assume the current directory if no “copy-to” target is specified.

Erasing Files

Unix uses the command rm (ReMove) to delete unwanted files. To remove the file “letter.2” which we have just created, enter:
  rm  letter.2

Enter the command “ls l*” to display a list of all files beginning with the letter “l”. Note that letter.2 is no longer present in the current directory.

The remove command can be used with wildcards in filenames; however, this can be dangerous as you might end up erasing files you had wanted to keep. It is recommended that you use the “-i” (interactive) option of rm for wildcard deletes — you will then be prompted to respond with a “y” or “Y” for each file you wish to delete.

Renaming a File

The typical Unix operating system utilities do not include a rename command; however, we can use the mv (MoVe) command (see Working with Directories for additional uses of this command) to “move” a file from one name to another. Observe the results of the following commands:
  ls  [d,l]*
  mv  letter  document
  ls  [d,l]*
  mv  document letter
  ls  [d,l]*
Note: The first mv command overwrites the file “document” which you had created in an earlier exercise by concatenating “page1” and “page2”. No warning is issued when the mv command is used to move a file into the name of an existing file. If you would like to be prompted for confirmation if the mv command were to overwrite an existing file, use the “-i” (interactive) option of the mv command, e.g.:
  mv  -i  page1  letter
You will now be told that the file “letter” already exists and you will be asked if you wish to proceed with the mv command. Answer anything but “y” or “Y” and the file “letter” will not be overwritten. See Command Alias Applications for information on creating an alias for mv which incorporates the “-i” option to prevent accidental overwrites when renaming files.

Using the Command Line

The command interpreter (shell) provides the mechanism by which input commands are interpreted and passed to the Unix kernel or other programs for processing. Observe the results of entering the following “commands”:
  ./filesize
  ./hobbit
  ./add2
  ls -F

Observe that “filesize” is an executable shell script which displays the size of files. Also note that “./hobbit” and “./add2” generate error diagnostics as there is no command or file with the name “hobbit” and the file “add2” lacks execute permission.

Standard Input and Standard Output

As you have seen previously, Unix expects standard input to come from the keyboard, e.g., enter:
  cat
  my_text
  <Ctrl-D>
Standard output is typically displayed on the terminal screen, e.g., enter:
  cat cars
Standard error (a listing of program execution error diagnostics) is typically displayed on the terminal screen, e.g., enter:
  ls xyzpqrz

Redirection

As illustrated above, many Unix commands read from standard input (typically the keyboard) and write to standard output (typically the terminal screen). The redirection operators enable you to read input from a file (<) or write program output to a file (>). When output is redirected to a file, the program output replaces the original contents of the file if it had previously existed; to add program output to the end of an existing file, use the append redirection operator (>>).
Observe the results of the following command:
  ./a.out
You will be prompted to enter a Fahrenheit temperature. After entering a numeric value, a message will be displayed on the screen informing you of the equivalent Centigrade temperature. In this example, you entered a numeric value as standard input via the keyboard and the output of the program was displayed on the terminal screen.
In the next example, you will read data from a file and have the result displayed on the screen (standard output):
  cat  data.in
  ./a.out  <  data.in

Now you will read from standard input (keyboard) and write to a file:

  ./a.out  >  data.two
  35
  cat  data.two

Now read from standard input and append the result to the existing file:

 ./a.out  <  data.in  >>  data.two
As another example of redirection, observe the result of the following two commands:
  ls  -la  /etc  >  temp
  more  temp

Here we have redirected the output of the ls command to the file “temp” and then used the more command to display the contents of this file a page at a time. In the next section, we will see how the use of pipes could simplify this operation.

Additional exercises illustrating the use of redirection are included in Using the C Programming Language and Review of Redirection.

Using Pipes and Filters

A filter is a Unix program which accepts input from standard input and places its output in standard output. Filters add power to the Unix system as programs can be written to use the output of another program as input and create output which can be used by yet another program. A pipe (indicated by the symbol “|” — vertical bar) is used between Unix commands to indicate that the output from the first is to be used as input by the second. Compare the output from the following two commands:
  ls -la /etc
  ls -la /etc | more
The first command above results in a display of all the files in the “/etc” directory in long format. It is difficult to make use of this information since it scrolls rapidly across the screen. In the second line, the results of the ls command are piped into the more command. We can now examine this information one screen at a time and can even back up to a prior screen of information if we wish to do so. As you become more familiar with Unix, you will find that piping output to the more command is very useful in a variety of applications.

The sort command can be used to sort the lines in a file in a desired order. Now enter the following commands and observe the results:
  who
  sort cars
  who  |  sort
The who command displays a listing of logged on users and the sort command enables us to sort information. The second command sorts the lines in the file cars alphabetically by first field and displays the result in standard output. The third command illustrates how the result of the who command can be passed to the sort command prior to being displayed. The result is a listing of logged on users in alphabetical order.
The following example uses the “awk” and “sort” commands to select and reorganize the output generated by the “ls” command:
  ls -l | awk '/:/ {print $5,$9}' | sort -nr
Note: Curly braces do not necessarily display correctly on all output devices. In the above example, there should be a left curly brace in front of the word print and a right curly brace following the number 9.
Observe that the output displays the filesize and filename in decreasing order of size. Here the ls command first generates a “long” listing of the files in the current directory which is piped to the “awk” utility, whose output is in turn piped to the “sort” command.
“awk” is a powerful utility which processes one or more program lines to find patterns within a file and perform selective actions based on what is found. Slash (/) characters are used as delimiters around the pattern which is to be matched and the action to be taken is enclosed in curly braces. If no pattern is specified, all lines in the file are processed and if no action is specified, all lines matching the specified pattern are output. Since a colon (:) is used here, all lines containing file information (the time column corresponding to each file contains a colon) are selected and the information contained in the 5th and 9th columns are output to the sort command.
Note: If the ls command on your system does not include a column listing group membership, use {print $4,$8} instead of the “print” command option of awk listed above.
Here the “sort” command options “-nr” specify that the output from “awk” is to be sorted in reverse numeric order, i.e., from largest to smallest.
For additional information on the “awk” and “sort” commands, see the online man pages or the References included as part of this documentation; the appendix of the Sobell book includes an overview of the “awk” command and several pages of examples illustrating its use.
The preceding command is somewhat complex and it is easy to make a mistake in entering it. If this were a command we would like to use frequently, we could include it in a shell script, as has been done in the sample file “filesize”. To use this shell script, simply enter the command:
  ./filesize
      or
  sh  filesize

If you examine the contents of this file with the cat or vi commands, you will see that it contains nothing more than the piping of the ls command to awk and then piping the output to sort.

The tee utility is used to send output to a file at the same time it is displayed on the screen:
  who | tee who.out | sort 
  cat who.out
Here you should have observed that a list of logged on users was displayed on the screen in alphabetical order and that the file “who.out” contained an unsorted listing of the same userids.

Some Additional File Handling Commands

Word Count

The command wc displays the number of lines, words, and characters in a file.
To display the number of lines, words, and characters in the file file_name, enter: wc file_name

Comparing the Contents of Two Files: the cmp and diff Commands

The cmp and diff commands are used to compare files; the “comp” command is not used to compare files, but to “compose a message”.
The cmp command can be used for both binary and text files. It indicates the location (byte and line) where the first difference between the two files appears.
The diff command can be used to compare text files and its output shows the lines which are different in the two files: a less than sign (“<“) appears in front of lines from the first file which differ from those in the second file, a greater than symbol (“>”) precedes lines from the second file. Matching lines are not displayed.
Observe the results of the following commands:
  cmp   page1  page2
  diff  page1  page2

Lines 1 and 2 of these two files are identical, line 3 differs by one character, and page1 contains a blank line following line 3, while page2 does not.

Hi Friends!! Can’t we have a single SIM with the option to select different operators for DATA services and VOICE respectively?

 

We could then select the best plan for voice as well as the best available DATA services according to individual requirements.

E.g., with a single SIM, I could select Airtel for my voice calls and Aircel for 3G DATA services.

Think it over and comment with your views.

Learn the advanced features of SONET and SDH; specifically, the different ways of concatenating SONET and SDH signals, different techniques for mapping packet data onto SONET and SDH connections, transparency services for carrier’s carrier applications, and fault management and performance monitoring capabilities.

This section is adapted from Chapter 3 of Optical Network Control: Architecture, Protocols, and Standards

  • 3.1 INTRODUCTION
  • 3.2 ALL ABOUT CONCATENATION
  • 3.2.1 Standard Contiguous Concatenation in SONET and SDH
  • 3.2.2 Arbitrary Concatenation
  • 3.2.3 Virtual Concatenation
  • 3.2.3.1 Higher-Order Virtual Concatenation (HOVC)
  • 3.2.3.2 Lower-Order Virtual Concatenation (LOVC)
  • 3.3 LINK CAPACITY ADJUSTMENT SCHEME
  • 3.4 PAYLOAD MAPPINGS
  • 3.4.1 IP over ATM over SONET
  • 3.4.2 Packet over SONET/SDH
  • 3.4.3 Generic Framing Procedure (GFP)
  • 3.4.3.1 GFP Frame Structure
  • 3.4.3.2 GFP Functions
  • 3.4.4 Ethernet over SONET/SDH
  • 3.5 SONET/SDH TRANSPARENCY SERVICES
  • 3.5.1 Methods for Overhead Transparency
  • 3.5.2 Transparency Service Packages
  • 3.6 WHEN THINGS GO WRONG
  • 3.6.1 Transport Problems and Their Detection
  • 3.6.1.1 Continuity Supervision
  • 3.6.1.2 Connectivity Supervision
  • 3.6.1.3 Signal Quality Supervision
  • 3.6.1.4 Alignment Monitoring
  • 3.6.2 Problem Localization and Signal Maintenance
  • 3.6.2.1 Alarm Indication Signals
  • 3.6.2.2 Remote Defect Indication
  • 3.6.3 Quality Monitoring
  • 3.6.3.1 Blips and BIPs
  • 3.6.4 Remote Error Monitoring
  • 3.6.5 Performance Measures
  • 3.7 SUMMARY

3.1 Introduction

In the previous chapter, we described TDM and how it has been utilized in SONET and SDH standards. We noted that when SONET and SDH were developed, they were optimized for carrying voice traffic. At that time no one anticipated the tremendous growth in data traffic that would arise due to the Internet phenomenon. Today, the volume of data traffic has surpassed voice traffic in most networks, and it is still growing at a steady pace. In order to handle data traffic efficiently, a number of new features have been added to SONET and SDH.

In this chapter, we review some of the advanced features of SONET and SDH. Specifically, we describe the different ways of concatenating SONET and SDH signals, and different techniques for mapping packet data onto SONET and SDH connections. We also address transparency services for carrier’s carrier applications, as well as fault management and performance monitoring capabilities. The subject matter covered in this chapter will be used as a reference when we discuss optical control plane issues in later chapters. A rigorous understanding of this material, however, is not a prerequisite for dealing with the control plane topics.

3.2 All about Concatenation

Three types of concatenation schemes are possible under SONET and SDH. These are:

  • Standard contiguous concatenation
  • Arbitrary contiguous concatenation
  • Virtual concatenation

These concatenation schemes are described in detail next.

3.2.1 Standard Contiguous Concatenation in SONET and SDH

SONET and SDH networks support contiguous concatenation whereby a few standardized “concatenated” signals are defined, and each concatenated signal is transported as a single entity across the network [ANSI95a, ITU-T00a]. This was described briefly in the previous chapter.

The concatenated signals are obtained by “gluing” together the payloads of the constituent signals, and they come in fixed sizes. In SONET, these are called STS-Nc Synchronous Payload Envelopes (SPEs), where N = 3X and X is restricted to the values 1, 4, 16, 64, or 256. In SDH, these are called VC-4 (equivalent to STS-3c SPE), and VC-4-Xc where X is restricted to 1, 4, 16, 64, or 256.

The multiplexing procedures for SONET (SDH) introduce additional constraints on the location of component STS-1 SPEs (VC-4s) that comprise the STS-Nc SPE (VC-4-Xc). The rules for the placement of standard concatenated signals are [ANSI95a]:

  1. Concatenation of three STS-1s within an STS-3c: The bytes from concatenated STS-1s shall be contiguous at the STS-3 level but shall not be contiguous when interleaved to higher-level signals. When STS-3c signals are multiplexed to a higher rate, each STS-3c shall be wholly contained within an STS-3 (i.e., occur only on tributary input boundaries 1–3, 4–6, 7–9, etc.). This rule does not apply to SDH.
  2. Concatenation of STS-1s within an STS-Nc (N = 3X, where X = 1, 4, 16, 64, or 256). Such concatenation shall treat STS-Nc signals as a single entity. The bytes from concatenated STS-1s shall be contiguous at the STS-N level, but shall not be contiguous when multiplexed on to higher-level signals. This also applies to SDH, where the SDH term for an STS-Nc is an AU-4-Xc where X = N/3.
  3. When the STS-Nc signals are multiplexed to a higher rate, these signals shall be wholly contained within STS-M boundaries, where M could be 3, 12, 48, 192, or 768, and its value must be the closest to, but greater than or equal to N (e.g., if N = 12, then the STS-12c must occur only on boundaries 1–12, 13–24, 25–36, etc.). In addition to being contained within STS-M boundaries, all STS-Nc signals must begin on STS-3 boundaries.

The primary purpose of these rules is to ease the development burden for hardware designers, but they can seriously affect the bandwidth efficiency of SONET/SDH links.

In Figure 3-1(a), an STM-16 (OC-48) signal is represented as a set of 16 time slots, each of which can contain a VC-4 (STS-3c SPE). Let us examine the placement of VC-4 and VC-4-4c (STS-3c and STS-12c SPE) signals into this structure, in line with the rules above. In particular, a VC-4-4c (STS-12c SPE) must start on boundaries of 4. Figure 3-1(b) depicts how the STM-16 has been filled with two VC-4-4c (STS-12c) and seven VC-4 signals. In Figure 3-1(c), three of the VC-4s have been removed, that is, are no longer in use. Due to the placement restrictions, however, a VC-4-4c cannot be accommodated in this space. In Figure 3-1(d), the STM-16 has been “regroomed,” that is, VC-4 #5 and VC-4 #7 have been moved to new timeslots. Figure 3-1(e) shows how the third VC-4-4c is accommodated. (A short code sketch of the placement rules follows the figure.)

Figure 3-1. Timeslot Constraints and Regrooming with Contiguous (Standard) Concatenation
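The placement rules above can be checked mechanically. The following Python sketch (the helper name valid_start and the timeslot numbering are ours, not from the text) tests whether an STS-Nc may begin at a given STS-1 timeslot under rule 3, and reproduces the observation that an STS-12c can only occupy timeslot groups 1–12, 13–24, 25–36, or 37–48 of an OC-48:

  # Check whether an STS-Nc may start at a given STS-1 timeslot (numbered
  # from 1) under the contiguous concatenation rules quoted above.
  STANDARD_SIZES = [3, 12, 48, 192, 768]        # STS-M boundaries (M = 3X)

  def valid_start(n, start):
      """True if an STS-Nc may begin at STS-1 timeslot 'start'."""
      if (start - 1) % 3 != 0:                  # rule 3: begin on an STS-3 boundary
          return False
      m = next(m for m in STANDARD_SIZES if m >= n)
      block_start = ((start - 1) // m) * m + 1  # wholly contained in one STS-M block
      return start + n - 1 <= block_start + m - 1

  # An STS-12c inside an OC-48 may only start in timeslots 1, 13, 25, 37:
  print([s for s in range(1, 49) if valid_start(12, s)])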

3.2.2 Arbitrary Concatenation

In the above example, a “regrooming” operation was performed to make room for a signal that could not be accommodated with the standard contiguous concatenation rules. The problem with regrooming is that it is service impacting, that is, service is lost while the regrooming operation is in progress. Because service impacts are extremely undesirable, regrooming is not frequently done, and the bandwidth is not utilized efficiently.

To get around these restrictions, some manufacturers of framers, that is, the hardware that processes the SDH multiplex section layer (SONET line layer), offer a capability known as “flexible” or arbitrary concatenation. With this capability, there are no restrictions on the size of an STS-Nc (VC-4-Xc) or the starting time slot used by the concatenated signal. Also, there are no constraints on adjacencies of the STS-1 (VC-4-Xc) time slots used to carry it, that is, the signals can use any combination of available time slots. Figure 3-2 depicts how the sequence of signals carried over the STM-16 of Figure 3-1 can be accommodated without any regrooming, when the arbitrary concatenation capability is available.

Figure 3-2. Timeslot Usage with Arbitrary Concatenation

3.2.3 Virtual Concatenation

As we saw earlier, arbitrary concatenation overcomes the bandwidth inefficiencies of standard contiguous concatenation by removing the restrictions on the number of components and their placement within a larger concatenated signal. Standard and arbitrary contiguous concatenation are services offered by the network, that is, the network equipment must support these capabilities. The ITU-T and the ANSI T1 committee have standardized an alternative, called virtual concatenation. With virtual concatenation, SONET and SDH PTEs can “glue” together the VCs or SPEs of separately transported fundamental signals. This is in contrast to requiring the network to carry signals as a single concatenated unit.

3.2.3.1 HIGHER-ORDER VIRTUAL CONCATENATION (HOVC)

HOVC is realized under SONET and SDH by the PTEs, which combine either multiple STS-1/STS-3c SPEs (SONET) or multiple VC-3s/VC-4s (SDH). Recall that the VC-3 and STS-1 SPE signals are nearly identical except that a VC-3 does not contain the fixed stuff bytes found in columns 30 and 59 of an STS-1 SPE. A SONET STS-3c SPE is equivalent to an SDH VC-4.

These component signals, VC-3s or VC-4s (STS-1 SPEs or STS-3c SPEs), are transported separately through the network to an end system and must be reassembled. Since these signals can take different paths through the network, they may experience different propagation delays. In addition to this fixed differential delay between the component signals, there can also be a variable delay component that arises due to the different types of equipment processing the signals and the dynamics of the fiber itself. Note that heating and cooling effects can affect the propagation speed of light in a fiber, leading to actual measurable differences in propagation delay.

The process of mapping a concatenated container signal, that is, the raw data to be transported, into a virtually concatenated signal is shown in Figure 3-3. Specifically, at the transmitting side, the payload gets packed in X VC-4s just as if these were going to be contiguously concatenated. Now the question is, How do we identify the component signals and line them up appropriately given that delays for the components could be different?

Figure 3-3. Mapping a Higher Rate Payload in a Virtually Concatenated Signal (from [ITU-T00a])

The method used to align the components is based on the multiframe techniques described in Chapter 2. A jumbo (very long) multiframe is created by overloading the multiframe byte H4 in the path overhead. Bits 5–8 of the H4 byte are incremented in each 125µs frame to produce a multiframe consisting of 16 frames. In this case, bits 5–8 of H4 are known as the multiframe indicator 1 (MFI1). This multiframe will form the first stage of a two-stage multiframe. In particular, bits 1–4 of the H4 byte are used in a way that depends on the position in the first stage of the multiframe. This is shown in Table 3-1.

Within the 16-frame first stage multiframe, a second stage multiframe indicator (MFI2) is defined utilizing bits 1–4 of H4 in frames 0 and 1, giving a total of 8 bits for MFI2. It is instructive to examine the following (a short numerical sketch in code follows this list):

  1. How long in terms of the number of 125 µs frames is the complete HOVC multiframe structure? Answer: The base frame (MFI1) is 16 frames long, and the second stage is 2^8 = 256 frames long. Since this is a two-stage process, the lengths multiply, giving a multiframe that is 16 × 256 = 4096 frames long.
  2. What is the longest differential delay, that is, the delay between components, that can be compensated? Answer: The differential delay must be within the duration of the overall multiframe structure, that is, 125 µs × 4096 = 512 ms, or a little over half a second.
  3. Suppose that an STS-1-2v is set up for carrying Ethernet traffic between San Francisco and New York such that one STS-1 goes via a satellite link and the other via conventional terrestrial fiber. Will this work? Answer: Assuming that a geo-synchronous satellite is used, the satellite’s altitude would be about 35775 km. Given that the speed of light is 2.99792 × 10^8 m/sec, this leads to a round trip delay of about 239 ms. If the delay for the fiber route is 20 ms, then the differential delay is about 219 ms, which is within the virtual concatenation range. Also, since the average circumference of the earth is only 40,000 km, this frame length should be adequate for the longest fiber routes.
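The following short Python sketch reproduces the arithmetic in items 1–3 above; the constants are the illustrative values from the text, not figures taken from a standard:

  FRAME_PERIOD_S = 125e-6                # one SONET/SDH frame
  MFI1_LEN = 16                          # first-stage multiframe (bits 5-8 of H4)
  MFI2_LEN = 2 ** 8                      # second-stage multiframe (8 bits -> 256)

  multiframe_frames = MFI1_LEN * MFI2_LEN                  # 4096 frames
  max_diff_delay_s = multiframe_frames * FRAME_PERIOD_S    # 0.512 s

  # Item 3: satellite hop versus terrestrial fibre
  c = 2.99792e8                          # speed of light, m/s
  satellite_altitude_m = 35775e3
  satellite_path_s = 2 * satellite_altitude_m / c          # up + down, ~0.239 s
  fibre_delay_s = 20e-3
  differential_s = satellite_path_s - fibre_delay_s        # ~0.219 s

  print(multiframe_frames, max_diff_delay_s)
  print(differential_s < max_diff_delay_s)                 # True -> within range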

Table 3-1. Use of Bits 1–4 in H4 Byte for First Stage Multiframe Indication (MFI1)

 

Multi-Frame Indicator 1 (MFI1) | Meaning of Bits 1–4 in H4
0 | 2nd multiframe indicator MFI2 MSB (bits 1–4)
1 | 2nd multiframe indicator MFI2 LSB (bits 5–8)
2–13 | Reserved (“0000”)
14 | Sequence indicator SQ MSB (bits 1–4)
15 | Sequence indicator SQ LSB (bits 5–8)

 

Now, the receiver must be able to distinguish the different components of a virtually concatenated signal. This is accomplished as follows. In frames 14 and 15 of the first stage multiframe, bits 1–4 of H4 are used to give a sequence indicator (SQ). This is used to indicate the components (and not the position in the multiframe). Due to this 8-bit sequence indicator, up to 256 components can be accommodated in HOVC. Note that it is the receiver’s job to compensate for the differential delay and to put the pieces back together in the proper order. The details of how this is done are dependent on the specific implementation.
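Since the realignment procedure is implementation specific, the following is only an illustrative Python sketch of the idea: each received component frame is tagged with its multiframe position (MFI) and sequence indicator (SQ), buffered until all members of that position have arrived, and then emitted in sequence order. The function name realign and the tuple format are assumptions for illustration:

  from collections import defaultdict

  def realign(frames, num_members):
      """frames: iterable of (mfi, sq, payload) tuples, possibly skewed."""
      pending = defaultdict(dict)              # mfi -> {sq: payload}
      for mfi, sq, payload in frames:
          pending[mfi][sq] = payload
          if len(pending[mfi]) == num_members:
              members = pending.pop(mfi)       # all members of this position present
              yield [members[s] for s in range(num_members)]

  # Two members arriving with different skews:
  rx = [(0, 1, "B0"), (0, 0, "A0"), (1, 0, "A1"), (2, 0, "A2"), (1, 1, "B1")]
  print(list(realign(rx, 2)))                  # [['A0', 'B0'], ['A1', 'B1']]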

3.2.3.2 LOWER-ORDER VIRTUAL CONCATENATION (LOVC)

The virtual concatenation of lower-order signals such as VT1.5s (VC-11), VT2s (VC-12), and so on is based on the same principles as described earlier. That is, a sequence number is needed to label the various components that make up the virtually concatenated signal, and a large multiframe structure is required for differential delay compensation. In the lower-order case, however, there are fewer overhead bits and bytes to spare, so the implementation may seem a bit complex. Let us therefore start with the capabilities obtained.

LOVC Capabilities and Limitations

Table 3-2 lists the LOVC signals for SONET/SDH, the signals they can be contained in and the limits on the number of components that can be concatenated. The last two columns are really the most interesting since they show the range of capacities and the incremental steps of bandwidth.

LOVC Implementation

Let us first examine how the differential delay compensating multiframe is put together. This is done in three stages. Recall that the SONET VT overhead (lower-order SDH VC overhead) is defined in a 500 µs multiframe, as indicated by the path layer multiframe indicator H4. This makes the four VT overhead bytes V5, J2, Z6, and Z7 available, one byte from each SONET/SDH frame in the multiframe. Since a number of bits in these bytes are used for other purposes, an additional second stage of multiframe structure is used to define extended VT signal labels.

This works as follows (note that SDH calls the Z7 byte K4 but uses it the same way): First of all, the V5 byte indicates whether the extended signal label is being used. Bits 5 through 7 of V5 provide a VT signal label. The signal label value 101 indicates that the VT mapping is given by the extended signal label in the Z7 byte. If this is the case, then the frame alignment signal “0111 1111 110” is sent serially in bit 1 of Z7, called the extended signal label bit. The length of this second stage VT-level multiframe (which runs inside the 500 µs VT multiframe) is 32 frames. The extended signal label is contained in bits 12–19 of the multiframe. Multiframe position 20 contains “0.” The remaining 12 bits are reserved for future standardization.

Table 3-2. Standardized LOVC Combinations and Limits

 

Signal (SONET/SDH) | Carried in (SONET/SDH) | X | Capacity (kbit/s) | In steps of (kbit/s)
VT1.5-X SPE/VC-11-Xv | STS-1/VC-3 | 1 to 28 | 1600 to 44800 | 1600
VT2-X SPE/VC-12-Xv | STS-1/VC-3 | 1 to 21 | 2176 to 45696 | 2176
VT3-X SPE | STS-1 | 1 to 14 | 3328 to 46592 | 3328
VT6-X SPE/VC-2-Xv | STS-1/VC-3 | 1 to 7 | 6784 to 47488 | 6784
VT1.5/VC-11-Xv | STS-3c | 1 to 64 | 1600 to 102400 | 1600
VT2/VC-12-Xv | STS-3c | 1 to 63 | 2176 to 137088 | 2176
VT3-X SPE | STS-3c | 1 to 42 | 3328 to 139776 | 3328
VT6-X SPE/VC-2-Xv | STS-3c | 1 to 21 | 6784 to 142464 | 6784
VT1.5/VC-11-Xv | unspecified | 1 to 64 | 1600 to 102400 | 1600
VT2/VC-12-Xv | unspecified | 1 to 64 | 2176 to 139264 | 2176
VT3-X SPE | unspecified | 1 to 64 | 3328 to 212992 | 3328
VT6-X SPE | unspecified | 1 to 64 | 6784 to 434176 | 6784
Note: X is limited to 64 due to the sequence indicator having 6 bits.

 

Bit 2 of the Z7 byte is used to convey the third stage of the multistage multiframe in the form of a serial string of 32 bits (over 32 four-frame multi-frames and defined by the extended signal label). This is shown in Figure 3-4. This string is repeated every 16 ms (32 bits × 500 µs/bit) or every 128 frames.

Figure 3-4. Third Stage of LOVC Multiframe Defined by Bit 2 of the Z7 Byte over the 32 Frame Second Stage Multiframe

The third stage string consists of the following fields: The third stage virtual concatenation frame count is contained in bits 1 to 5. The LOVC sequence indicator is contained in bits 6 to 11. The remaining 21 bits are reserved for future standardization.

Let us now consider a concrete example. Suppose that there are three stages of multiframes with the last stage having 5 bits dedicated to frame counting. What is the longest differential delay that can be compensated and in what increments? The first stage was given by the H4 byte and is of length 4, resulting in 4 × 125 µs = 500 µs. The second stage was given by the extended signal label (bit 1 of Z7) and it is of length 32. Since this is inside the first stage, the lengths multiply, resulting in 32 × 500 µs = 16 ms. The third stage, which is within the 32-bit Z7 string, has a length of 2^5 = 32 and is contained inside the second stage. Hence, the lengths multiply, resulting in 32 × 16 ms = 512 ms. This is the same compensation we showed with HOVC. Since the sequence indicator of the third stage is used to line up the components, the delay compensation is in 16 ms increments.

3.3 Link Capacity Adjustment Scheme

Virtual concatenation allows the flexibility of creating SONET/SDH pipes of different sizes. The Link Capacity Adjustment Scheme or LCAS [ITU-T01a] is a relatively new addition to the SONET/SDH standard. It is designed to increase or decrease the capacity of a Virtually Concatenated Group (VCG) in a hitless fashion. This capability is particularly useful in environments where dynamic adjustment of capacity is important. The LCAS mechanism can also automatically decrease the capacity if a member in a VCG experiences a failure in the network, and increase the capacity when the fault is repaired. Although autonomous addition after a failure is repaired is hitless, removal of a member due to path layer failures is not hitless. Note that a “member” here refers to a VC (SDH) or an SPE (SONET). In the descriptions below, we use the term member to denote a VC.

Note that virtual concatenation can be used without LCAS, but LCAS requires virtual concatenation. LCAS is resident in the H4 byte of the path overhead, the same byte as virtual concatenation. The H4 bytes from a 16-frame sequence make up a message for both virtual concatenation and LCAS. Virtual concatenation uses 4 of the 16 bytes for its MFI and sequence numbers. LCAS uses 7 others for its purposes, leaving 5 reserved for future development. While virtual concatenation is a simple labeling of individual STS-1s within a channel, LCAS is a two-way handshake protocol. Status messages are continuously exchanged and consequent actions taken.

From the perspective of dynamic provisioning enabled by LCAS, each VCG can be characterized by two parameters:

  • XMAX, which indicates the maximum size of the VCG; it is usually dictated by hardware and/or standardization limits
  • XPROV, which indicates the number of provisioned members in the VCG

With each completed ADD command, XPROV increases by 1, and with each completed REMOVE command XPROV decreases by 1. The relationship 0 ≤ XPROV ≤ XMAX always holds. The operation of LCAS is unidirectional. This means that in order to bidirectionally add or remove members to or from a VCG, the LCAS procedure has to be repeated twice, once in each direction. These actions are independent of each other, and they are not required to be synchronized.

The protocols behind LCAS are relatively simple. For each member in the VCG (total of XMAX), there is a state machine at the transmitter and a state machine at the receiver. The state machine at the transmitter can be in one of the following five states:

  1. IDLE: This member is not provisioned to participate in the VCG.
  2. NORM: This member is provisioned to participate in the VCG and has a good path to the receiver.
  3. DNU: This member is provisioned to participate in the VCG and has a failed path to the receiver.
  4. ADD: This member is in the process of being added to the VCG.
  5. REMOVE: This member is in the process of being deleted from the VCG.

The state machine at the receiver can be in one of the following three states:

  1. IDLE: This member is not provisioned to participate in the VCG.
  2. OK: The incoming signal for this member experiences no failure condition. Or, the receiver has received and acknowledged a request for addition of this member.
  3. FAIL: The incoming signal for this member experiences some failure condition, or an incoming request for removal of a member has been received and acknowledged.

The transmitter and the receiver communicate using control packets to ensure smooth transition from one state to another. The control packets consist of XMAX control words, one for each member of the VCG. The following control words are sent from the source to the receiver in order to carry out dynamic provisioning functions. Each word is associated with a specific member (i.e., VC) in the VCG.

  • FADD: Add this member to the group.
  • FDNU: Delete this member from the group.
  • FIDLE: Indicate that this VC is currently not a member of the group.
  • FEOS: Indicate that this member has the highest sequence number in the group (EOS denotes End of Sequence).
  • FNORM: Indicate that this member is a normal part of the group and does not have the highest sequence number.

The following control words are sent from the receiver to the transmitter. Each word is associated with a specific VC in the VCG.

  • RFAIL and ROK: These messages capture the status of all the VCG members at the receiver. The status of all the members is returned to the transmitter in the control packets of each member. The transmitter can, for example, read the information from member No. 1 and, if that is unavailable, the same information from member No. 2, and so on. As long as no return bandwidth is available, the transmitter uses the last received valid status.
  • RRS_ACK: This is a bit used to acknowledge the detection of renumbering of the sequence or a change in the number of VCG members. This acknowledgment is used to synchronize the transmitter and the receiver.

The following is a typical sequence for adding a member to the group (a minimal code sketch of the exchange follows the steps). Multiple members can be added simultaneously for fast resizing.

  1. The network management system orders the source to add a new member (e.g., a VC) to the existing VCG.
  2. The source node starts sending FADD control commands in the selected member. The destination notices the FADD command and returns an ROK in the link status for the new member.
  3. The source sees the ROK and assigns the member a sequence number that is one higher than the highest number currently in use.
  4. At a frame boundary, the source includes the VC in the byte interleaving and sets the control command to FEOS, indicating that this VC is in use and it is the last in the sequence.
  5. The VC that previously was “EOS” now becomes “NORM” (normal) as it is no longer the one with the highest sequence number.
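A minimal Python sketch of this add exchange is shown below. It models only the control words named above; the Member class, the add_member helper, and the collapsing of the two-way handshake into a single function are simplifications of ours, not part of the standard:

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class Member:
      ctrl: str                 # control word currently sent for this member
      sq: Optional[int] = None  # sequence number once the member is in use

  def add_member(vcg, new):
      """Sketch of steps 2-5; vcg is kept ordered by sequence number."""
      new.ctrl = "FADD"                            # step 2: source sends FADD
      rx_status = "ROK"                            # step 2: receiver answers ROK
      if rx_status == "ROK":
          new.sq = vcg[-1].sq + 1 if vcg else 0    # step 3: next sequence number
          if vcg:
              vcg[-1].ctrl = "FNORM"               # step 5: old EOS becomes NORM
          new.ctrl = "FEOS"                        # step 4: new member is the EOS
          vcg.append(new)
      return vcg

  group = [Member("FEOS", 0)]
  add_member(group, Member("FIDLE"))
  print([(m.ctrl, m.sq) for m in group])           # [('FNORM', 0), ('FEOS', 1)]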

The following is a typical sequence for deleting the VC with the highest sequence number (EOS) from a VCG:

  1. The network management system orders the source to delete a member from the existing VCG.
  2. The source node starts sending FIDLE control commands in the selected VC. It also sets the member with the next highest sequence number as the EOS and sends FEOS in the corresponding control word.
  3. The destination notices the FIDLE command and immediately drops the channel from the reassembly process. It also responds with RFAIL and inverts the RRS_ACK bit.

In this example, the deleted member has the highest sequence number. If this is not the case, then the other members with sequence numbers between the newly deleted member and the highest sequence number are renumbered.

LCAS and virtual concatenation add a tremendous amount of flexibility to SONET and SDH. Although SONET and SDH were originally designed to transport voice traffic, the advent of these new mechanisms has made them well suited for carrying more dynamic and bursty data traffic. In the next section, we discuss mechanisms for mapping packet payloads into SONET and SDH SPEs.

3.4 Payload Mappings

So far, the multiplexing structure of SONET and SDH has been described in detail. To get useful work out of these different sized containers, a payload mapping is needed, that is, a systematic method for inserting and removing the payload from a SONET/SDH container. Although it is preferable to use standardized mappings for interoperability, a variety of proprietary mappings may exist for various purposes.

In this regard, one of the most important payloads carried over SONET/SDH is IP. Much of the bandwidth explosion that set the wheels in motion for this book came from the growth in IP services. Hence, our focus is mainly on IP in the rest of this chapter. Figure 3-5 shows different ways of mapping IP packets into SONET/SDH frames. In the following, we discuss some of these mechanisms.

Figure 3-5. Different Alternatives for Carrying IP Packets over SONET

3.4.1 IP over ATM over SONET

The “Classical IP over ATM” solution supports robust transmission of IP packets over SONET/SDH using ATM encapsulation. Under this solution, each IP packet is encapsulated into an ATM Adaptation Layer Type 5 (AAL5) frame using multiprotocol LLC/SNAP encapsulation [Perez+95]. The resulting AAL5 Protocol Data Unit (PDU) is segmented into 48-byte payloads for ATM cells. ATM cells are then mapped into a SONET/SDH frame.

One of the problems with IP-over-ATM transport is that the protocol stack may introduce a bandwidth overhead as high as 18 percent to 25 percent. This is in addition to the approximately 4 percent overhead needed for SONET. On the positive side, ATM permits sophisticated traffic engineering, flexible routing, and better partitioning of the SONET/SDH bandwidth. Despite the arguments on the pros and cons of the method, IP-over-ATM encapsulation continues to be one of the main mechanisms for transporting IP over SONET/SDH transport networks.

3.4.2 Packet over SONET/SDH

ATM encapsulation of IP packets for transport over SONET/SDH can be quite inefficient from the perspective of bandwidth utilization. Packet over SONET/SDH (or POS) addresses this problem by eliminating the ATM encapsulation, and using the Point-to-Point Protocol (PPP) defined by the IETF [Simpson94]. PPP provides a general mechanism for dealing with point-to-point links and includes a method for mapping user data, a Link Control Protocol (LCP), and assorted Network Control Protocols (NCPs). Under POS, PPP-encapsulated IP packets are framed using the High-Level Data Link Control (HDLC) protocol and mapped into the SONET SPE or SDH VC [Malis+99]. The main function of HDLC is to provide framing, that is, delineation of the PPP encapsulated IP packets across the synchronous transport link. Standardized mappings for IP into SONET using PPP/HDLC have been defined in IETF RFC 2615 [Malis+99] and ITU-T Recommendation G.707 [ITU-T00a].
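As a rough illustration of the framing step, the sketch below applies the HDLC-like byte stuffing used with PPP (flag 0x7E, escape 0x7D, per RFC 1662) to a payload; the FCS calculation and the PPP protocol field are omitted, and the helper name hdlc_frame is ours:

  FLAG, ESC = 0x7E, 0x7D

  def hdlc_frame(ppp_payload: bytes) -> bytes:
      body = bytes([0xFF, 0x03]) + ppp_payload     # HDLC address + control fields
      stuffed = bytearray([FLAG])
      for b in body:
          if b in (FLAG, ESC):
              stuffed += bytes([ESC, b ^ 0x20])    # escape and flip bit 5
          else:
              stuffed.append(b)
      stuffed.append(FLAG)
      return bytes(stuffed)

  # A payload byte equal to the flag is escaped as 7d 5e:
  print(hdlc_frame(bytes([0x21, 0x7E, 0x42])).hex())   # 7eff03217d5e427e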

Elimination of the ATM layer under POS results in more efficient bandwidth utilization. However, it also eliminates the flexibility of link bandwidth management offered by ATM. POS is most popular in backbone links between core IP routers running at 2.5 Gbps and 10 Gbps speeds. IP over ATM is still popular in lower-speed access networks, where bandwidth management is essential.

During the initial deployment of POS, it was noticed that the insertion of packets containing certain bit patterns could lead to the generation of the Loss of Frame (LOF) condition. The problem was attributed to the relatively short period of the SONET section (SDH regenerator section) scrambler, which is only 127 bits and synchronized to the beginning of the frame. In order to alleviate the problem, an additional scrambling operation is performed on the HDLC frames before they are placed into the SONET/SDH SPEs. This procedure is depicted in Figure 3-6.

Figure 3-6. Packet Flow for Transmission and Reception of IP over PPP over SONET/SDH
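The additional scrambling defined in RFC 2615 is a self-synchronous x^43 + 1 scrambler: each transmitted bit is the data bit XORed with the bit sent 43 bit positions earlier, and descrambling is the mirror operation. The following Python sketch (bit-level, unoptimized, with an assumed all-zero initial state) illustrates the scramble/descramble pair:

  def scramble(bits, state=None):
      state = list(state or [0] * 43)      # last 43 transmitted bits
      out = []
      for b in bits:
          s = b ^ state.pop(0)             # out(t) = in(t) XOR out(t - 43)
          state.append(s)
          out.append(s)
      return out

  def descramble(bits, state=None):
      state = list(state or [0] * 43)      # last 43 received bits
      out = []
      for s in bits:
          out.append(s ^ state.pop(0))     # in(t) = out(t) XOR out(t - 43)
          state.append(s)
      return out

  data = [1, 0, 1, 1, 0, 0, 1] * 10
  assert descramble(scramble(data)) == data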

3.4.3 Generic Framing Procedure (GFP)

GFP [ITU-T01b] was initially proposed as a solution for transporting data directly over dark fibers and WDM links. But due to the huge installed base of SONET/SDH networks, GFP soon found applications in SONET/SDH networks. The basic appeal of GFP is that it provides a flexible encapsulation framework for both block-coded [Gorsche+02] and packet oriented [Bonenfant+02] data streams. It has the potential of replacing a plethora of proprietary framing procedures for carrying data over existing SONET/SDH and emerging WDM/OTN transport.

GFP supports all the basic functions of a framing procedure including frame delineation, frame/client multiplexing, and client data mapping [ITU-T01b]. GFP uses a frame delineation mechanism similar to ATM, but generalizes it for both fixed and variable size packets. As a result, under GFP, it is not necessary to search for special control characters in the client data stream as required in 8B/10B encoding, or for frame delineators as with HDLC framing. GFP allows flexible multiplexing whereby data emanating from multiple clients or multiple client sessions can be sent over the same link in a point-to-point or ring configuration. GFP supports transport of both packet-oriented (e.g., Ethernet, IP, etc.) and character-oriented (e.g., Fiber Channel) data. Since GFP supports the encapsulation and transport of variable-length user PDUs, it does not need complex segmentation/reassembly functions or frame padding to fill unused payload space. These careful design choices have substantially reduced the complexity of GFP hardware, making it particularly suitable for high-speed transmissions.

In the following section, we briefly discuss the GFP frame structure and basic GFP functions.

3.4.3.1 GFP FRAME STRUCTURE

A GFP frame consists of a core header and a payload area, as shown in Figure 3-7. The GFP core header is intended to support GFP-specific data link management functions. The core header also allows GFP frame delineation independent of the content of the payload. The GFP core header is 4 bytes long and consists of two fields:

Figure 3-7. Generic Framing Procedure Frame Structure

Payload Length Indicator (PLI) Field

A 2-byte field indicating the size of the GFP payload area in bytes.

Core Header Error Correction (cHEC) Field

A 2-octet field containing a cyclic redundancy check (CRC) sequence that protects the integrity of the core header.
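As an illustration of how the two fields fit together, the sketch below builds a core header from a payload length, using a bit-by-bit CRC-16 with the generator x^16 + x^12 + x^5 + 1; register initialization details and the core header scrambling step defined in G.7041 are deliberately omitted, so treat this as a sketch rather than a conformant encoder:

  def crc16(data: bytes, poly=0x1021, reg=0x0000) -> int:
      # bit-by-bit CRC-16 with generator x^16 + x^12 + x^5 + 1
      for byte in data:
          reg ^= byte << 8
          for _ in range(8):
              reg = ((reg << 1) ^ poly) & 0xFFFF if reg & 0x8000 else (reg << 1) & 0xFFFF
      return reg

  def core_header(payload_len: int) -> bytes:
      pli = payload_len.to_bytes(2, "big")         # Payload Length Indicator
      return pli + crc16(pli).to_bytes(2, "big")   # PLI followed by cHEC

  print(core_header(1500).hex())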

The payload area is of variable length (0–65,535 octets) and carries client data such as client PDUs, client management information, and so on. Structurally, the payload area consists of a payload header, a payload information field, and an optional payload Frame Check Sequence (FCS) field. The FCS information is used to detect corruption of the payload.

Payload Header

The variable length payload header consists of a payload type field and a type Header Error Correction (tHEC) field that protects the integrity of the payload type field. Optionally, the payload header may include an extension header. The payload type field consists of the following subfields:

  • Payload Type Identifier (PTI): This subfield identifies the type of frame. Two values are currently defined: user data frames and client management frames.
  • Payload FCS Indicator (PFI): This subfield indicates the presence or absence of the payload FCS field.
  • Extension Header Identifier (EXI): This subfield identifies the type of extension header in the GFP frame. Extension headers facilitate the adoption of GFP for different client-specific protocols and networks. Three kinds of extension headers are currently defined: a null extension header, a linear extension header for point-to-point networks, and a ring extension header for ring networks.
  • User Payload Identifier (UPI): This subfield identifies the type of payload in the GFP frame. The UPI is set according to the transported client signal type. Currently defined UPI values include Ethernet, PPP (including IP and MPLS), Fiber Channel [Benner01], FICON [Benner01], ESCON [Benner01], and Gigabit Ethernet. Mappings for 10/100 Mb/s Ethernet and digital video broadcast, among others, are under consideration.

Payload Information Field

This field contains the client data. There are two modes of client signal payload adaptation defined for GFP: frame-mapped GFP (GFP-F), applicable to most packet data types, and transparent-mapped GFP (GFP-T), applicable to 8B/10B coded signals. Frame-mapped GFP payloads consist of variable length packets. In this mode, a client frame is mapped in its entirety into one GFP frame. Examples of such client signals include Gigabit Ethernet and IP/PPP. With transparent-mapped GFP, a number of client data characters, mapped into efficient block codes, are carried within a GFP frame.

3.4.3.2 GFP FUNCTIONS

The GFP frame structure was designed to support the basic functions provided by GFP, namely, frame delineation, client/frame multiplexing, header/payload scrambling, and client payload mapping. In the following, we discuss each of these functions.

Frame Delineation

The GFP transmitter and receiver operate asynchronously. The transmitter inserts GFP frames on the physical link according to the bit/byte alignment requirements of the specific physical interface (e.g., SONET/SDH, OTN, or dark fiber). The GFP receiver is responsible for identifying the correct GFP frame boundary at the time of link initialization, and after link failures or loss of frame events. The receiver “hunts” for the start of the GFP frame using the last received four octets of data. The receiver first computes the cHEC value based on these four octets. If the computed cHEC matches the value in the (presumed) cHEC field of the received data, the receiver tentatively assumes that it has identified the frame boundary. Otherwise, it shifts forward by 1 bit and checks again. After a candidate GFP frame has been identified, the receiver waits for the next candidate GFP frame based on the PLI field value. If a certain number of consecutive GFP frames are detected, the receiver transitions into a regular operational state. In this state, the receiver examines the PLI field, validates the incoming cHEC field, and extracts the framed PDU.
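A simplified Python sketch of this hunt is shown below. For brevity it slides a byte at a time over a byte string, whereas the real receiver shifts bit by bit and requires several consecutive good frames before declaring synchronization; the crc16 and hunt helpers are illustrative names:

  def crc16(data, poly=0x1021, reg=0x0000):
      for byte in data:
          reg ^= byte << 8
          for _ in range(8):
              reg = ((reg << 1) ^ poly if reg & 0x8000 else reg << 1) & 0xFFFF
      return reg

  def hunt(stream: bytes) -> int:
      """Offset of the first 4-octet group whose cHEC validates, or -1."""
      for i in range(len(stream) - 3):
          if crc16(stream[i:i + 2]) == int.from_bytes(stream[i + 2:i + 4], "big"):
              return i                     # tentative frame boundary found
      return -1

  pli = (5).to_bytes(2, "big")
  frame = pli + crc16(pli).to_bytes(2, "big") + b"hello"
  print(hunt(b"\x12\x34" + frame))         # 2: boundary found after the junk bytes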

Client/Frame Multiplexing

GFP supports both frame and client multiplexing. Frames from multiple GFP processes, such as idle frames, client data frames, and client management frames, can be multiplexed on the same link. Client data frames get priority over management frames. Idle frames are inserted when neither data nor management frames are available for transmission.

GFP supports client-multiplexing capabilities via the GFP linear and ring extension headers. For example, linear extension headers (see Figure 3-7) contain an 8-bit channel ID (CID) field that can be used to multiplex data from up to 256 client sessions on a point-to-point link. An 8-bit spare field is available for future use. Various proposals for ring extension headers are currently being considered for sharing GFP payload across multiple clients in a ring environment.

Header/Payload Scrambling

Under GFP, both the core header and the payload area are scrambled. Core header scrambling ensures that an adequate number of 0-1 transitions occur during idle data conditions (thus allowing the receiver to stay synchronized with the transmitter). Scrambling of the GFP payload area ensures correct operation even when the payload information is coincidentally the same as the scrambling word (or its inverse) from frame-synchronous scramblers such as those used in the SONET section layer (SDH RS layer).

Client Payload Mapping

As mentioned earlier, GFP supports two types of client payload mapping: frame-mapped and transparent-mapped. Frame mapping of native client payloads into GFP is intended to facilitate packet-level handling of incoming PDUs. Examples of such client signals include IEEE 802.3 Ethernet MAC frames, PPP/IP packets, or any HDLC framed PDU. Here, the transmitter encapsulates an entire frame of the client data into a GFP frame. Frame multiplexing is supported with frame-mapped GFP. Frame-mapped GFP uses the basic frame structure of a GFP client frame, including the required payload header.

Transparent mapping is intended to facilitate the transport of 8B/10B block-coded client data streams with low transmission latency. Transparent mapping is particularly applicable to Fiber Channel, ESCON, FICON, and Gigabit Ethernet. Instead of buffering an entire client frame and then encapsulating it into a GFP frame, the individual characters of the client data stream are extracted, and a fixed number of them are mapped into periodic fixed-length GFP frames. The mapping occurs regardless of whether the client character is a data or control character, which thus preserves the client 8B/10B control codes. Frame multiplexing is not precluded with transparent GFP. The transparent GFP client frame uses the same structure as the frame-mapped GFP, including the required payload header.

3.4.4 Ethernet over SONET/SDH

As shown in Figure 3-5, there are different ways of carrying Ethernet frames over SONET/SDH, OTN, and optical fiber. Ethernet MAC frames can be encapsulated in GFP frames and carried over SONET/SDH. Also shown in the figure are the different physical layer encoding schemes, including the Gigabit Ethernet physical layer and the 10-Gigabit Ethernet physical (PHY) layer optimized for LAN and WAN. The Gigabit Ethernet physical layer is an 8B/10B coded data stream, and it can be encapsulated into GFP frames and carried over SONET/SDH. The 10-Gigabit Ethernet WAN PHY is SONET/SDH encoded, and hence it can be directly mapped into STS-192c/STM-64 frames.

3.5 SONET/SDH Transparency Services

SONET and SDH have the following notions of transparency built-in, as described in Chapter 2:

  1. Path transparency, as provided by the SONET line and SDH multiplex section layers. This was the original intent of SONET and SDH, that is, transport of path layer signals transparently between PTEs.
  2. SONET line and SDH multiplex section transparency, as provided by the SONET section and SDH regenerator section layers, respectively.
  3. SONET section and SDH regenerator section transparency, as provided by the physical layer.

Of these, only (1) was considered a “user service” within SONET and SDH. There are reasons now to consider (2) and (3) as services, in addition to newer transparency services.

Figure 3-8 shows a typical scenario where transparency services may be desired. Here, two SONET networks (labeled “Domain 1”) are separated by an intervening optical transport network of some type (labeled “Domain 2”). For instance, Domain 1 could consist of two metro networks under a single administration, separated by a core network (Domain 2) under a different administration. The two disjoint parts of Domain 1 are interconnected by provisioning a “link” between network elements NE1 and NE2, as shown. The characteristics of this link depend on the type of transparency desired. In general, transparency allows NE1 and NE2 to use the functionality provided by SONET overhead bytes in various layers. For instance, section transparency allows the signal from NE1 to NE2 to pass through Domain 2 without any overhead information being modified in transit. An all-optical network or a network with transparent regenerators can provide section layer transparency. This service is equivalent to having a dedicated wavelength (lambda) between NE1 and NE2. Thus, the service is often referred to as a lambda service, even if the signal is electrically regenerated within the network. Section transparency allows NE1 and NE2 to terminate the section layer and use the section (and higher layer) overhead bytes for their own purposes.

Figure 3-8. Networking Scenario Used to Define SONET/SDH Transparency Services

If the OC-N to be transported between NE1 and NE2 is the same size (in terms of capacity) as those used within the optical network, then the section transparency service is a reasonable approach. If the optical network, however, deals with signals much larger than these OC-N signals, then there is the potential for inefficient resource utilization. For example, suppose the optical network is composed of DWDM links and switches that can effectively deal with OC-192 signals. A “lambda” in this network could indeed accommodate an OC-12 signal, but only 1/16th of the capacity of that lambda will be used. In such a case, the OC-12 signal has to be multiplexed in some way into an OC-192 signal. But SONET (SDH) multiplexing takes place at the line (multiplex section) layer. Hence, there is no standard way to convey the OC-12 overhead when multiplexing the constituent path signals into an OC-192 signal. This means that section and line overhead bytes presented by NE1 will be modified within Domain 2. How then to transfer the overhead bytes transparently across Domain 2? Before we examine the methods for accomplishing this, it is instructive to look at the functionality provided by overhead bytes and what it means to support transparency.

Tables 3-3 and 3-4 list the overhead bytes available at different layers, the functionality provided and when the bytes are updated (refer to Figures 2-4 and 2-5).

Table 3-3. SONET Section (SDH Regenerator Section) Overhead Bytes and Functionality

 

Overhead Bytes | Comments
A1 and A2 (Framing) | These are repeated in all STS-1 signals within an OC-N. No impact on transparency.
J0 (Trace) | Only conveyed in the 1st STS-1, and covers the entire frame. J0 bytes in signals 2–N are reserved for growth, i.e., Z0. Used to identify the entire section layer signal.
B1 (Section BIP-8) | Only conveyed in the 1st STS-1, and covers the entire frame. B1 bytes in signals 2–N are undefined. The B1 byte must be updated if section, line, or path layer content changes.
E1 (Orderwire), F1 (User) | Only conveyed in the 1st STS-1, and cover the entire frame. E1 and F1 in signals 2–N are undefined.
D1–D3 (Section DCC) | Only conveyed in the 1st STS-1, and covers the entire frame. D1–D3 bytes in signals 2–N are undefined.

 

Table 3-4. SONET Line (SDH Multiplex Section) Overhead Bytes and Functionality

 

Overhead Bytes | Comments
H1, H2, H3 (Pointer bytes) | These are repeated in all STS-1s within an STS-N.
B2 (Line BIP-8) | This is used for all STS-1s within an STS-N. Must be updated if line or path layer content changes. Used to determine signal degrade conditions.
K1, K2 (APS bytes) | Only conveyed in the 1st STS-1 signal, and cover the entire line. This space in signals 2–N is undefined. This is the line APS functionality.
D4–D12 (Line DCC) | Only conveyed in the 1st STS-1 for the entire line. D4–D12 bytes in signals 2–N are undefined.
S1 (Synchronization byte) | Only conveyed in the 1st STS-1, and carries the synchronization status message for the entire line. S1 bytes in STS-1 signals 2–N are reserved for growth (Z1 byte). Note that if a re-multiplexing operation were to take place, this byte could not be carried through.
M0, M1 (Line Remote Error Indication) | Conveyed in the Nth STS-1 of the STS-N signal. If N > 1, this byte is called M1; if N = 1, it is called M0. When N > 1, the corresponding bytes in signals 1 to N–1 are reserved for growth (Z2 byte).
E2 (Line orderwire) | Only conveyed in the 1st STS-1, and covers the entire line. The E2 bytes in signals 2–N are undefined.

 

With standard SONET/SDH path layer multiplexing, the H1–H3 (pointer) bytes must be modified when the clocks are different for the streams to be multiplexed. The B2 byte must be updated when any of the line layer bytes are changed. Also related to timing is the S1 byte, which reports on the synchronization status of the line. This byte has to be regenerated if multiplexing is performed. Thus, it is not possible to preserve all the overhead bytes when the signal from NE1 is multiplexed with other signals within Domain 2. The additional procedures that must be performed to achieve transparency are discussed next.

3.5.1 Methods for Overhead Transparency

We can group the transport overhead bytes into five categories as follows:

  1. Framing bytes A1 and A2, which are always terminated and regenerated
  2. Pointer bytes H1, H2 and H3, which must be adjusted for multiplexing, and the S1 byte
  3. General overhead bytes: J0, E1, F1, D1-D3, K1, K2, D4-D12, M0/M1, E2
  4. BIP-8 error monitoring bytes B1 and B2
  5. An assortment of currently unused growth bytes

With regard to the network shown in Figure 3-8, the following are different strategies for transparently transporting the general overhead bytes:

  • Information forwarding: The overhead bytes originating from NE1 are placed into the OC-N signal and remain unmodified in Domain 2.
  • Information tunneling: Tunneling generally refers to the encapsulation of information to be transported at the ingress of a network in some manner and restoring it at the egress. With respect to Figure 3-8, the overhead bytes originating from NE1 are placed in unused overhead byte locations of the signal transported within Domain 2. These overhead bytes are restored before the signal is delivered to NE2.

As an example of forwarding and tunneling, consider Figure 3-9, which depicts four STS-12 signals being multiplexed into an STS-48 signal within Domain 2. Suppose that the J0 byte of each of these four signals has to be transported transparently. Referring to Table 3-3, it can be noted that the J0 space in signals 2–4 of the STS-48 is reserved, that is, no specific purpose for these bytes is defined within Domain 2. Thus, referring to the structure of the multiplexed overhead information shown in Figure 2-5, the J0 bytes from the second, third, and fourth STS-12 signals can be forwarded unmodified through the intermediate network. This is not true for the J0 byte of the first STS-12, however, since the intermediate network uses the J0 byte in the first STS-1 to cover the entire STS-48 signal (Table 3-3). Hence, the J0 byte of the first STS-12 has to be tunneled by placing it in some unused overhead byte in the STS-48 signal at the ingress and recovering it at the egress.

Figure 3-9. Transparency Example to Illustrate Forwarding and Tunneling

Now, consider the error monitoring bytes, B1 and B2. Their usage is described in detail in section 3.6. Briefly, taking SONET as an example, B1 and B2 bytes contain the parity codes for the section and line portion of the frame, respectively. A node receiving these bytes in a frame uses them to detect errors in the appropriate portions of the frame. According to the SONET specification, B1 and B2 are terminated and regenerated by each STE or LTE, respectively. With regard to the network of Figure 3-8, the following options may be considered for their transport across Domain 2:

  • Error regeneration: B1 and B2 are simply regenerated at every network hop.
  • Error forwarding: As before, the B1 and B2 bytes are regenerated at each hop. But instead of simply sending these regenerated bytes in the transmitted frame (as in the previous case), the bytes are XOR’d (i.e., bitwise summed) with the corresponding bytes received. With this process, the B1 or B2 bytes will accumulate all the errors (at the appropriate layer) for the transparently transported signal. The only drawback of this method is that the error counts within Domain 2 would appear artificially high, and to sort out the true error counts, correlation of the errors reported along the transparent signal’s path would be required. (A small sketch of this XOR accumulation follows the list.)
  • Error tunneling: In this case, the incoming parity bytes (B1 and/or B2) are carried in unused overhead locations within the transport signal in Domain 2. In addition, at each network hop where the bytes are required to be regenerated, the tunneled parity bytes are regenerated and then XOR’d (bitwise binary summation) with the error result that was obtained (by comparing the difference between the received and calculated BIP-8s). In this way, the tunneled parity bytes are kept up to date with respect to errors, and the standard SONET/SDH B1 and B2 bytes are used within Domain 2 without any special error correlation/compensation being performed.
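The sketch below illustrates the XOR accumulation idea mentioned in the error forwarding and tunneling bullets. It relies only on the fact that BIP-8 even parity reduces to an XOR over the covered bytes; the helper names and the toy data are ours:

  from functools import reduce

  def bip8(data: bytes) -> int:
      # BIP-8 even parity: bit i of the code is the parity of bit i over all
      # covered bytes, which is simply the XOR of those bytes.
      return reduce(lambda a, b: a ^ b, data, 0)

  def hop_error_pattern(rx_parity_byte: int, rx_data: bytes) -> int:
      # Non-zero bits mark bit columns corrupted since the parity was written.
      return rx_parity_byte ^ bip8(rx_data)

  # XOR-accumulating per-hop patterns gives an end-to-end error picture
  # without having to correlate per-hop error counts afterwards:
  hop1 = hop_error_pattern(0x0F, b"\x0f")      # clean hop       -> 0b0
  hop2 = hop_error_pattern(0x00, b"\x01")      # one column hit  -> 0b1
  print(bin(hop1 ^ hop2))                      # 0b1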

3.5.2 Transparency Service Packages

We have so far looked at the mechanisms for providing transparent transport. From the perspective of a network operator, a more important issue is the determination of the types of transparency services that may be offered. A transparency service package defines which overhead functionality will be transparently carried across the network offering the service. As an example, let us consider the network shown in Figure 3-9 again. The following is a list of individual services that could be offered by Domain 2. These may be grouped in various combinations to create different transparency service packages:

  1. J0 transparency: Allows signal identification across Domain 2.
  2. Section DCC (D1–D3) transparency: Allows STE to STE data communication across Domain 2.
  3. B2 and M0/M1 transparency: Allows line layer error monitoring and indication across Domain 2.
  4. K1 and K2 byte transparency: Allows line layer APS across Domain 2. This service will most likely be used with (3) so that signal degrade conditions can be accurately detected and acted upon.
  5. Line DCC (D4–D12) transparency: Allows LTE to LTE data communication across Domain 2.
  6. E2 transparency: Allows LTE to LTE order wire communication across Domain 2.
  7. Miscellaneous section overhead transparency, that is, E1 and F1.

Whether overhead/error forwarding or tunneling is used is an internal decision made by the domain offering the transparency service, based on equipment capabilities and overhead usage. Note that to make use of equipment capable of transparent services, a service provider must know the overhead usage, termination, and forwarding capabilities of equipment used in the network. For example, the latest release of G.707 [ITU-T00a] allows the use of some of the unused overhead bytes for physical layer forward error correction (FEC). Hence, a link utilizing such a “feature” would have additional restrictions on which bytes could be used for forwarding or tunneling.

3.6 When Things Go Wrong

One of the most important aspects built into optical transport systems is their “self-diagnosis” capability. That is, the ability to detect a problem (i.e., observe a symptom), localize the problem (i.e., find where it originated), and discover the root cause of the problem. In fact, SONET and SDH include many mechanisms to almost immediately classify the root cause of a problem. This is done by monitoring the signal integrity between peers at a given layer, and also when transferring a signal from a client (higher) layer into a server (lower) layer (Figure 2-17).

In the following, we first consider the various causes of transport problems. Next, we examine how problems are localized and how signal quality is monitored. Finally, we review the methods and terminology for characterizing problems and their duration.

3.6.1 Transport Problems and Their Detection

Signal monitoring functionality includes the following: continuity supervision, connectivity supervision, and signal quality supervision. These are described next.

3.6.1.1 CONTINUITY SUPERVISION

A fundamental issue in telecommunication is ascertaining whether a signal being transmitted is successfully received. Lack of continuity at the optical or electrical layers in SONET/SDH is indicated by the Loss of Signal (LOS) condition. This may arise from either the failure of a transmitter (e.g., laser, line card, etc.) or a break in the line (e.g., fiber cut, WDM failure, etc.). The exact criteria for when the LOS condition is declared and when it is cleared are described in reference [ITU-T00b]. For optical SDH signals, a typical criterion is the detection of no transitions on the incoming signal (before unscrambling) for time T, where 2.3 µs ≤ T ≤ 100 µs. An LOS defect is cleared if there are signal transitions within 125 µs. When dealing with other layers, the loss of continuity is discovered using a maintenance signal known as the Alarm Indication Signal (AIS). AIS indicates that there is a failure further upstream in the lower layer signal. This is described further in section 3.6.2.1.
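
As a rough illustration, the following Python sketch models the declare/clear logic described above. The 2.3–100 µs declaration window and the 125 µs clear criterion come from the text; the event-driven interface and the default threshold are assumptions made for the sketch.

class LosMonitor:
    def __init__(self, t_declare: float = 100e-6, t_clear: float = 125e-6):
        self.t_declare = t_declare       # quiet time that declares LOS
        self.t_clear = t_clear           # transition spacing that clears LOS
        self.last_transition = 0.0
        self.los = False

    def on_transition(self, now: float) -> None:
        # Called when a transition is seen on the incoming signal
        # (before unscrambling). Closely spaced transitions clear LOS.
        if self.los and (now - self.last_transition) < self.t_clear:
            self.los = False
        self.last_transition = now

    def poll(self, now: float) -> bool:
        # Called periodically; a line quiet for t_declare seconds raises LOS.
        if (now - self.last_transition) >= self.t_declare:
            self.los = True
        return self.los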

3.6.1.2 CONNECTIVITY SUPERVISION

Connectivity supervision deals with the determination of whether a SONET/SDH connection at a certain layer has been established between the intended pair of peers. This is particularly of interest if there has been an outage and some type of protection or restoration action has been taken. A trail trace identifier is used for connection supervision. Specifically,

  • The J0 byte is used in the SONET section (SDH regenerator section) layer. The section trace string is 16 bytes long (carried in successive J0 bytes) as per recommendation G.707 [ITU-T00a].
  • The J1 byte is used in the SONET/SDH higher-order path layer (e.g., SONET STS-1 and above). The higher-order path trace string could be 16 or 64 bytes long as per recommendation G.707 [ITU-T00a].
  • The J2 byte is used in the SONET/SDH lower-order path layer (e.g., SONET VT signals). The lower-order path trace string is 16 bytes long as per recommendation G.707 [ITU-T00a].

For details of trail trace identifiers used for tandem connection monitoring (TCM), see recommendations G.707 [ITU-T00a] and G.806 [ITU-T00c]. The usage of this string is typically controlled from the management system. Specifically, a trace string is configured in the equipment at the originating end. An “expected string” is configured at the receiving end. The transmitter keeps sending the trace string in the appropriate overhead byte. If the receiver does not receive the expected string, it raises an alarm, and further troubleshooting is initiated.
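
A trivial sketch of the receive-side check follows, assuming the 16-byte trace message has already been accumulated from successive J0 (or J1/J2) bytes; the padding handling is an assumption, and the alarm action itself is left to the caller.

def trace_identifier_mismatch(received_trace: bytes, expected: str) -> bool:
    # Strip NUL/space padding from the accumulated trace message and compare
    # it with the provisioned "expected string". A True result corresponds
    # to the trace identifier mismatch condition that triggers an alarm.
    received = received_trace.rstrip(b"\x00 ").decode("ascii", errors="replace")
    return received != expected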

3.6.1.3 SIGNAL QUALITY SUPERVISION

Signal quality supervision determines whether a received signal contains too many errors and whether the trend in errors is getting worse. In SONET and SDH, parity bits called Bit Interleaved Parity (BIP) are added to the signal in various layers. This allows the receiving end, known as the near-end, to obtain error statistics as described in section 3.6.3. To give a complete view of the quality of the signal in both directions of a bidirectional line, the number of detected errors at the far-end (transmitting end) may be sent back to the near-end via a Remote Error Indicator (REI) signal.

The following bits and bytes are used for near-end signal quality monitoring under SONET and SDH:

  • SONET section (SDH regenerator section) layer: The B1 byte is used to implement a BIP-8 error detecting code that covers the previous frame.
  • SONET line (SDH multiplex section) layer: In the case of SDH STM-N signals, a BIP-N × 24 carried in the 3 B2 bytes of each constituent STM-1 (3 × N bytes in all) is used. In the case of SONET STS-N, a BIP-N × 8 composed of the N B2 bytes is used. These cover the entire contents of the frame excluding the regenerator section overhead.
  • SONET path (SDH HOVC) layer: The B3 byte is used to implement a BIP-8 code covering all the bits in the previous VC-3, VC-4, or VC-4-Xc.
  • SONET VT path (SDH LOVC) layer: Bits 1 and 2 of the V5 byte are used to implement a BIP-2 code covering all the bits in the previous VC-1/2.

SONET/SDH provides the following mechanisms for carrying the REI information. For precise usage, see either T1.105 [ANSI95a] or G.707 [ITU-T00a].

  • Multiplex section layer REI: For STM-N (N = 0, 1, 4, 16), 1 byte (M1) is allocated for use as Multiplex Section REI. For STM-N (N = 64 and 256), 2 bytes (M0, M1) are allocated for use as a multiplex section REI. Note that this is in line with the most recent version of G.707 [ITU-T00a].
  • Path layer REI: For STS (VC-3/4) path status, the first 4 bits of the G1 path overhead are used to return the count of errors detected via the path BIP-8, B3. Bit 3 of V5 is the VT Path (VC-1/2) REI that is sent back to the originating VT PTE, if one or more errors were detected by the BIP-2.

3.6.1.4 ALIGNMENT MONITORING

When receiving a time division multiplexed (TDM) signal, whether it is electrical or optical, a critically important stage of processing is to find the start of the TDM frame and to maintain frame alignment. In addition, when signals are multiplexed together under SONET/SDH, the pointer mechanism needs to be monitored.

Frame Alignment and Loss of Frame (LOF)

The start of an STM-N (OC-3N) frame is found by searching for the A1 and A2 bytes contained in the STM-N (OC-3N) signal. Recall that the A1 and A2 bytes form a particular pattern and that the rest of the frame is scrambled. This framing pattern is continuously monitored against the assumed start of the frame. Generally, the receiver has 625 µs to detect an out-of-frame (OOF) condition. If the OOF state persists for 3 ms or more, then a loss of frame (LOF) state will be declared. To exit the LOF state, the start of the frame must be found and remain valid for 3 ms.
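
A sketch of the 3 ms integration described above, in Python. Only the two 3 ms persistence checks come from the text; the sampling interface and granularity are assumptions.

class LofMonitor:
    T_INTEGRATE = 3e-3   # 3 ms persistence for both entering and leaving LOF

    def __init__(self):
        self.lof = False
        self.timer = 0.0

    def update(self, oof: bool, dt: float) -> bool:
        # oof: True if the A1/A2 framing pattern was not found at the assumed
        # frame start during this dt-second observation interval.
        if oof != self.lof:
            # Condition disagrees with the current state: integrate it.
            self.timer += dt
            if self.timer >= self.T_INTEGRATE:
                self.lof = oof           # declare or clear LOF after 3 ms
                self.timer = 0.0
        else:
            self.timer = 0.0             # condition matches state: reset
        return self.lof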

Loss of Multiframe

SDH LOVCs and SONET VTs use the multi-frame structure described earlier. The 500 µs multiframe start phase is recovered by performing multiframe alignment on bits 7 and 8 of byte H4. Out-of-multiframe (OOM) is assumed once an error is detected in the H4 bit 7 and 8 sequence. Multiframe alignment is considered recovered when an error-free H4 sequence is found in four consecutive VC-n (VT) frames.

Pointer Processing and Loss of Pointer (LOP)

Pointer processing in SONET/SDH is used in both the HOVC (STS path) and LOVC (VT path) layers. This processing is important in aligning payload signals (SDH VCs or SONET paths) within their containing signals (STM-N/OC-3N). Pointer processing is performed separately for each payload signal; without it, the payload signal is effectively “lost.” Hence, pointer values are closely monitored as part of pointer processing [ITU-T00a, ITU-T00b]. A loss of pointer (LOP) state is declared under severe error conditions.

3.6.2 Problem Localization and Signal Maintenance

Once a problem has been detected, its exact location has to be identified for the purposes of debugging and repair. SONET/SDH provides sophisticated mechanisms for this in the form of Alarm Indication Signals (AIS) and the Remote Defect Indication (RDI). These are described below.

3.6.2.1 ALARM INDICATION SIGNALS

Suppose that there is a major problem with the signal received by an intermediate point in a SONET network. In this case, a special Alarm Indication Signal is transmitted in lieu of the normal signal to maintain transmission continuity. An AIS indicates to the receiving equipment that there is a transmission interruption located at, or upstream, of the equipment originating the AIS. Note that if the AIS is followed upstream starting from the receiver, it will lead to the location of the error. In other words, the AIS signal is an important aid in fault localization. It is also used to deliver news of defects or faults across layers.

SONET STE will originate an Alarm Indication Signal-Line (AIS-L) (MS-AIS in SDH) upon detection of an LOS or LOF defect. There are two variants of the AIS-L signal. The simplest is a valid section overhead followed by an “all ones” pattern in the rest of the frame bytes (before scrambling). To detect AIS-L, it is sufficient to look at bits 6, 7, and 8 of the K2 byte and check for the “111” pattern. A second function of the AIS-L is to provide a signal suitable for normal clock recovery at downstream STEs and LTEs. See [ANSI95a] for the details of the application, removal, and detection of AIS-L.

SONET LTE will generate an Alarm Indication Signal-Path (AIS-P) upon detection of an LOS, LOF, AIS-L, or LOP-P defect. AIS-P (AU-AIS in SDH) is specified as “all ones” in the STS SPE as well as the H1, H2, and H3 bytes. STS pointer processors detect AIS-P as “111…” in bytes H1 and H2 in three consecutive frames.

SONET STS PTE will generate an Alarm Indication Signal-VT (AIS-V) for VTs of the affected STS path upon detection of an LOS, LOF, AIS-L, LOP-P, AIS-P, or LOP-V defect. The AIS-V signal is specified as “all ones” in the entire VT, including the V1-V4 bytes. VT pointer processors detect AIS-V as “111…” in bytes V1 and V2 in three consecutive VT superframes.

The SDH AIS signals for the various layers are nearly identical to those of SONET in definition and use, as shown in Table 3-5.

3.6.2.2 REMOTE DEFECT INDICATION

Through the AIS mechanism, SONET allows the downstream entities to be informed about problems upstream in a timely fashion (in the order of milliseconds). The AIS signal is good for triggering downstream protection or restoration actions. For quick recovery from faults, it is also important to let the upstream node know that there is a reception problem downstream. The Remote Defect Indication (RDI) signal is used for this purpose. The precise definition of RDI, as per [ANSI95a], is

Table 3-5. SDH AIS Signals by Layer

 

Layer Type AIS Overhead AIS Activation Pattern AIS Deactivation Pattern
MSn MS-AIS K2, bits 6 to 8 “111” ≠ “111”
VC-3/4 AU-AIS H1, H2 See Annex A/G.783 [ITU-T00b]
VC-3/4 TCM IncAIS N1, bits 1 to 4 “1110” ≠ “1110”
S11/12/2 (VC-11/12/2) TU-AIS V1, V2 See Annex A/G.783 [ITU-T00b]
VC-11/12/2 TCM IncAIS N2, bit 4 “1” “0”

 

A signal transmitted at the first opportunity in the outgoing direction when a terminal detects specific defects in the incoming signal.

At the line level, the RDI-L code is returned to the transmitting LTE when the receiving LTE has detected an incoming line defect. RDI-L is generated within 100 ms by an LTE upon detection of an LOS, LOF, or AIS-L defect. RDI-L is indicated by a “110” code in bits 6, 7, and 8 of the K2 byte (after unscrambling).
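
As a small illustration, the K2 interpretation described here (and for AIS-L in section 3.6.2.1) can be written as follows. Treating bits 6 to 8 as the three least significant bits of the byte is an assumption about bit numbering, and a real detector would also require the pattern to persist over several consecutive frames before declaring the defect.

def decode_k2(k2: int) -> str:
    # Bits 6, 7, 8 of the unscrambled K2 byte, taken here as its three
    # least significant bits.
    low3 = k2 & 0b111
    if low3 == 0b111:
        return "AIS-L"      # "111": alarm indication signal, line
    if low3 == 0b110:
        return "RDI-L"      # "110": remote defect indication, line
    return "normal"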

At the STS path level, the RDI-P code is returned to the transmitting PTE when the receiving PTE has detected an incoming STS path defect. There are three classes of defects that trigger RDI-P:

  1. Payload defects: These generally indicate problems detected in adapting the payload being extracted from the STS path layer.
  2. Server defects: These indicate problems in one of the layers responsible for transporting the STS path.
  3. Connectivity defects: These include only the trace identifier mismatch (TIM) and unequipped conditions.

Table 3-6 shows current use of the G1 byte for RDI-P purposes (consult [ANSI95a] for details).

The remote defect indication for the VT path layer, RDI-V, is similar to RDI-P. It is used to return an indication to the transmitting VT PTE that the receiving VT PTE has detected an incoming VT Path defect. There are three classes of defects that trigger RDI-V:

Table 3-6. Remote Defect Indicator—Path (RDI-P) via the G1 Byte

 

G1, bit 5 G1, bit 6 G1, bit 7 Meaning
0 1 0 Remote payload defect
0 1 1 No remote defect
1 0 1 Server defect
1 1 0 Remote connectivity defect

 

  1. Payload defects: These generally indicate problems detected in adapting the payload being extracted from the VT path layer.
  2. Server defects: These generally indicate problems in the server layers to the VT path layer.
  3. Connectivity defects: These generally indicate that there is a connectivity problem within the VT path layer.

RDI-V uses bit 8 of the V5 byte or bits 5 through 7 of the Z7 byte; see [ANSI95a] for details.

One thing to note about RDI signals is that they are “peer to peer” indications, that is, they stay within the layer in which they are generated. The AIS and RDI signals form the “fast” notification mechanisms for protection and restoration, that is, these are the primary triggers. Examples of their usage are given in the next chapter. The RDI signals in the various SDH layers are nearly identical to those of SONET; they are summarized in Table 3-7.

Table 3-7. RDI Signals for Various SDH Layers

 

Layer Type RDI/ODI Overhead RDI/ODI Activation Pattern RDI/ODI Deactivation Pattern
MSn RDI K2, bits 6 to 8 “110” ≠ “110”
S3D/4D (VC-3/4 TCM option 2) RDI N1, bit 8, frame 73 “1” “0”
S11/12/2 (VC-11/12/2) RDI V5, bit 8 “1” “0”
S11D/12D/2D (VC-11/12/2 TCM) RDI N2, bit 8, frame 73 “1” “0”

 

3.6.3 Quality Monitoring

3.6.3.1 BLIPS AND BIPS

The bit error rates are typically extremely low in optical networks. For example, in 1995, the assumed worst-case bit error rate (BER) for SONET regenerator section engineering was 10⁻¹⁰, or one error per 10 billion bits. Today, that would be considered quite high. Hence, for error detection in a SONET frame, we can assume very few bit errors per frame.

As an example, the number of bits in an STS-192 frame is 1,244,160 (9 rows × 90 columns per STS-1 × 8 bits/byte × 192 STS-1s). With a BER of 10⁻¹⁰, it can be expected that there will be one bit error in every 8038 frames. The probability of two errors in the same frame is fairly low. Since the bit rate of an STS-192 signal is 10 Gbps (or 10¹⁰ bits per second), a BER of 10⁻¹⁰ gives rise to one bit error every second on the average. This is why a BER of 10⁻¹⁰ is considered quite high today.
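
The same arithmetic, written out in Python for concreteness:

bits_per_frame = 9 * 90 * 8 * 192            # 1,244,160 bits in an STS-192 frame
ber = 1e-10
frames_per_error = 1 / (bits_per_frame * ber)
print(round(frames_per_error))               # ~8038 frames between bit errors
print(10e9 * ber)                            # ~1 bit error per second at 10 Gbps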

Figure 3-10 shows the general technique used in SONET and SDH for monitoring bit errors “in-service” over various portions of the signal. This method is known as the Bit Interleaved Parity 8 Bits, or BIP-8 for short. Although the name sounds complex, the idea and calculation are rather simple. In Figure 3-10, X1-X5 represents a set of bytes that are being checked for transmission errors. For every bit position in these bytes, a separate running tally of the parity (i.e., whether the number of 1s that occur is odd or even) is kept. The corresponding bit position of the BIP-8 byte is set to “1” if the parity is odd and to “0” if it is even. The BIP-8 byte is sent, typically in the following frame, to the destination. The destination recomputes the BIP-8 code based on the contents of the received frame and compares it with the BIP-8 received. If there are no bit errors, then these two codes should match. Figure 3-10(b) depicts the case where one of the bytes, X2, encounters a single bit error during transmission, that is, bit 2 changes from 1 to 0. In this case, the received BIP-8 and the recomputed BIP-8 differ by a single bit and, in fact, the number of differing bits can be used as an estimate of the number of bit errors.

Figure 3-10. Example of BIP-8 Calculation and Error Detection

Note that the BIP-8 technique works well under the assumption of low bit error rates. The study of general mechanisms for error detection and correction using redundant information bits is known as algebraic coding theory (see [Lin+83]).
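
A minimal Python sketch of the BIP-8 calculation and comparison of Figure 3-10 follows. Exactly which bytes are covered, and where scrambling is applied, differs per layer as described next and is omitted here.

def bip8(data: bytes) -> int:
    # Bit i of the result is the even parity of bit i across all covered
    # bytes, which is simply the XOR of those bytes.
    parity = 0
    for byte in data:
        parity ^= byte
    return parity

def estimate_errors(covered_bytes: bytes, received_bip8: int) -> int:
    # Recompute the parity over what was received and compare it with the
    # BIP-8 byte sent by the far end (in the following frame). The number
    # of differing bits serves as the bit error estimate.
    diff = bip8(covered_bytes) ^ received_bip8
    return bin(diff).count("1")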

BIP-8 is used for error monitoring in different SONET/SDH layers. At the SONET section layer, the B1 byte contains the BIP-8 calculated over all the bits of the previous STS-N frame (after scrambling). The computed BIP-8 is placed in the B1 byte of the first STS-1 (before scrambling). This byte is defined only for the first STS-1 of an STS-N signal. SDH uses this byte for the same purpose. Hence, the BIP-8 in this case is calculated over the entire SONET frame and covers a different number of bytes for different signals, for example, STS-12 vs. STS-192.

At the SONET line layer, BIP-8 is calculated over all the bits of the line overhead and the STS-1 SPE (before scrambling). The computed BIP-8 is placed in the B2 byte of the next STS-1 frame (before scrambling). This byte is separately computed for all the STS-1 signals within an STS-N signal. These N BIP-8 bytes are capable of detecting fairly high bit error rates, up to 10⁻³. To see this, consider an STS-1 line signal (i.e., an STS-1 frame without section layer overhead). The number of bytes in this signal is 801 (9 rows × 90 columns − 9 section overhead bytes). Each bit in the line BIP-8 code is used to cover 801 bits (which are in the corresponding bit position of the 801 bytes in the line signal). Since a BER of 10⁻³ means an average of one bit error every 1000 bits, there will be less than one bit error in 801 bits (on the average). Thus, the line BIP-8 code is sufficient for detecting these errors. Note, however, that BIP-8 (and any parity-based error detection mechanism) may fail if there are multiple, simultaneous bit errors.

At the STS Path level, BIP-8 is calculated over all the bits of the previous STS SPE (before scrambling) and carried in the B3 path overhead byte. SDH uses this byte for the same purpose but excludes the fixed stuff bytes in the calculation. The path BIP-8, like the section BIP-8, covers a different number of bytes depending on the size of the STS path signal, that is, STS-3 vs. STS-12.

At the VT path level, 2 bits of the VT path level overhead byte V5 are used for carrying a BIP-2. The technique for this is illustrated in Figure 3-11. To save on overhead, the parity counts over all the odd and the even bit positions are combined and represented by the two bits of the BIP-2 code, respectively. Recall that the VT SPE is a multiframe spanning four SONET frames. The BIP-2 is calculated over all bytes in the previous VT SPE, including all overhead but the pointers (Figure 3-11).

Figure 3-11. BIP Calculation at the VT Path Level

Let us examine how effective the BIP-2 code is. The number of bits in the VT1.5 SPE is 832 ([(9 rows × 3 columns) – 1 pointer byte] × 8 bits/byte × 4 frames per SPE). Each bit of the BIP-2 code covers half the bits in the VT1.5 SPE, that is, 416 bits. Hence, BIP-2 can handle error rates of 1 in 500 bits (BER between 10⁻² and 10⁻³). Now, a VT6 is four times the size of the VT1.5. In this case, each parity bit covers 1664 bits, handling a BER slightly worse than 10⁻⁴.
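
A sketch of the BIP-2 folding described above, assuming the convention that one parity bit covers the odd-numbered bit positions of every covered byte and the other covers the even-numbered positions; the exact bit numbering used here is illustrative.

def bip2(data: bytes) -> int:
    # First form the per-position parity (a BIP-8), then fold the four
    # odd-numbered and four even-numbered positions into one bit each.
    bip8 = 0
    for byte in data:
        bip8 ^= byte
    odd = (bip8 >> 7) ^ (bip8 >> 5) ^ (bip8 >> 3) ^ (bip8 >> 1)
    even = (bip8 >> 6) ^ (bip8 >> 4) ^ (bip8 >> 2) ^ bip8
    return ((odd & 1) << 1) | (even & 1)   # 2-bit result carried in V5 bits 1-2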

3.6.4 Remote Error Monitoring

The error monitoring capabilities provided by SONET and SDH enable the receiver to know the error count and compute the BER on the received signal at various layers. Based on this information, it is useful to let the sender learn about the quality of the signal received at the other end. The following mechanisms are used for this purpose.

The STS-1 line REI (M0 byte) is used by the receiver to return the number of errored bits detected at the line layer to the sender. The receiver arrives at this number by considering the difference between the received and the recomputed BIP-8 (B2) codes. In the case of an STS-N signal, the M1 byte is used for conveying the REI information. Clearly, up to 8 × N errors could be detected with STS-N BIP-8 codes (as each STS-1 is covered by its own BIP-8). But only a count of at most 255 can be reported in the single M1 byte. Thus, in signals of OC-48 and higher rates, the number 255 is returned when 255 or more errors are detected.

At the path layer, the receiver uses the first four bits of the G1 path overhead to return the number of errors detected (using the path BIP-8) to the sender. At the VT path layer, the receiver uses bit 3 of the V5 byte to indicate the detection of one or more errors to the sender.

3.6.5 Performance Measures

When receiving word of a problem, one is inclined to ask some general questions such as: “How bad is it?” “How long has it been this way?” and “Is it getting worse or better?” The following terminology is used in the transport world. An anomaly is a condition that gives the first hint of possible trouble. A defect is an affirmation that something has indeed gone wrong, that is, anomalies have become frequent enough to affect a required function. A failure is declared when a defect persists long enough to be considered permanent. Whether an event notification or an alarm is sent to a management system under these conditions is a separate matter. Performance parameters in SONET and SDH are used to quantify these conditions.

A SONET or SDH network element supports performance monitoring (PM) according to the layer of functionality it provides. A SONET network element accumulates PM data based on overhead bits at the Section, Line, STS Path, and VT Path layers. In addition, PM data are available at the SONET Physical layer using physical parameters. The following is a summary of the different performance parameters defined in SONET. Similar performance parameters are also monitored and measured in SDH. For a detailed treatment of PM parameters in SONET, refer to [Telcordia00].

Physical Layer Performance Parameters

The physical layer performance measurement enables proactive monitoring of the physical devices to facilitate early indication of a problem before a failure occurs. Several physical parameters are measured, including laser bias current, optical power output by the transmitter, and optical power at the receiver. Another important physical layer parameter is the Loss of Signal (LOS) second, which is the count of 1-second intervals containing one or more LOS defects.

Section Layer Performance Parameters

The following section layer performance parameters are defined in SONET; a small sketch showing how the per-second counts can be tallied appears after the list. Note that all section layer performance parameters are defined for the near-end. There are no far-end parameters at the Section layer.

  • Code Violation (CV-S): The CV-S parameter is a count of BIP errors detected at the section layer. Up to eight section BIP errors can be detected per STS-N frame.
  • Errored Second (ES-S): The ES-S parameter is a count of the number of 1 second intervals during which at least one section layer BIP error was detected, or an SEF (see below) or LOS defect was present.
  • Errored Second Type A (ESA-S) and Type B (ESB-S): ESA-S is the count of 1-second intervals containing one CV-S, and no SEF or LOS defects. ESB-S is the count of 1-second intervals containing more than one but less than X CV-S errors, and no SEF or LOS defects. Here, X is a user-defined number.
  • Severely Errored Second (SES-S): The SES-S parameter is a count of 1-second intervals during which K or more Section layer BIP errors were detected, or an SEF or LOS defect was present. K depends on the line rate and can be set by the user.
  • Severely Errored Frame Second (SEFS-S): The SEFS-S parameter is a count of 1-second intervals during which an SEF defect was present. An SEF defect is detected when the incoming signal has a minimum of four consecutive errored frame patterns. An SEF defect is expected to be present when an LOS or LOF defect is present. But there may be situations when this is not the case, and the SEFS-S parameter is only incremented based on the presence of the SEF defect.
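
The sketch referred to above: a per-second classification of the section layer counts, in Python. The threshold defaults are placeholders, not values from any standard; X and K are user-settable, as stated in the list.

def classify_section_second(cv_s: int, sef: bool, los: bool,
                            x: int = 4, k: int = 30) -> dict:
    # One 1-second observation interval, turned into increments for the
    # section layer parameters defined above.
    return {
        "CV-S":   cv_s,
        "ES-S":   int(cv_s >= 1 or sef or los),
        "ESA-S":  int(cv_s == 1 and not sef and not los),
        "ESB-S":  int(1 < cv_s < x and not sef and not los),
        "SES-S":  int(cv_s >= k or sef or los),
        "SEFS-S": int(sef),
    }

Summing these per-second increments over successive intervals yields the accumulated PM registers.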

Line Layer Performance Parameters

At the SONET line layer, both near-end and far-end parameters are monitored and measured. Far-end line layer performance is conveyed back to the near-end LTE via the K2 byte (RDI-L) and the M0 or M1 byte (REI-L). Some of the important near-end performance parameters are defined below. The far-end parameters are defined in a similar fashion.

  • Code Violation (CV-L): The CV-L parameter is a count of BIP errors detected at the line layer. Up to 8N BIP errors can be detected per STS-N frame.
  • Errored Second (ES-L): The ES-L parameter is a count of 1-second intervals during which at least one line layer BIP error was detected or an AIS-L defect is present.
  • Errored Second Type A (ESA-L) and Type B (ESB-L): ESA-L is the count of 1-second intervals containing one CV-L error and no AIS-L defects. ESB-L is the count of 1-second intervals containing X or more CV-L errors, or one or more AIS-L defects. Here, X is a user-defined number.
  • Severely Errored Second (SES-L): The SES-L parameter is a count of 1-second intervals during which K or more line layer BIP errors were detected, or an AIS-L defect is present. K depends on the line rate and can be set by the user.
  • Unavailable Second (UAS-L): Count of 1-second intervals during which the SONET line is unavailable. The line is considered unavailable after the occurrence of 10 consecutive SES-Ls.
  • AIS Second (AISS-L): Count of 1-second intervals containing one or more AIS-L defects.

Path Layer Performance Parameters

Both STS path and VT path performance parameters are monitored at the path layer. Also, both near-end and far-end performance parameters are measured. Far-end STS path layer performance is conveyed back to the near-end STS PTE using bits 1 through 4 (REI-P) and 5 through 7 (RDI-P) of the G1 byte. Far-end VT path layer performance is conveyed back to the near-end VT PTE using bit 3 of the V5 byte (REI-V), and either bits 5 through 7 of the Z7 byte or bit 8 of the V5 byte (RDI-V). Some of the important near-end STS path performance parameters are defined below. The far-end parameters are defined in a similar fashion.

  • Code Violation (CV-P): Count of BIP-8 errors that are detected at the STS-path layer.
  • Errored Second (ES-P): Count of 1-second intervals containing one or more CV-P errors, or one or more AIS-P, LOP-P, TIM-P, or UNEQ-P defects.
  • Errored Second Type A (ESA-P) and Type B (ESB-P): ESA-P is the count of 1-second intervals containing one CV-P error and no AIS-P, LOP-P, TIM-P, or UNEQ-P defects. ESB-P is the count of 1-second intervals containing more than one but less than X CV-P errors and no AIS-P, LOP-P, TIM-P, or UNEQ-P defects. Here, X is a user-defined number.
  • Severely Errored Second (SES-P): Count of 1-second intervals containing X or more CV-P errors, or one or more AIS-P, LOP-P, TIM-P, or UNEQ-P defects. Here, X is a user-defined number.
  • Unavailable Second (UAS-P): Count of 1-second intervals during which the SONET STS-path is unavailable. A path is considered unavailable after the occurrence of 10 consecutive SES-Ps.
  • Pointer Justification Counts: To monitor the adaptation of the path payloads into the SONET line, the pointer positive and negative adjustment events are counted. The number of 1-second intervals during which a pointer adjustment event occurs is also kept track of.
3.7 Summary

SONET and SDH-based optical transport networks have been deployed extensively. It is therefore important to understand the fundamentals of these technologies before delving into the details of the control plane mechanisms. After all, the optical network control plane is a relatively recent development. Its primary application in the near term will be in SONET/SDH networks. In this context, it is vital to know about the low-level control mechanisms that already exist in SONET and SDH and how they help in building advanced control plane capabilities. The next chapter continues with a description of another key topic relevant to the control plane, that is, protection and restoration mechanisms in SONET and SDH networks. Following this, the subject of the modern optical control plane is dealt with in earnest.

Virtual Concatenation: Knowing the Details
They say the devil is in the details. That’s certainly the case when dealing with virtual concatenation. Clearly, designers at the chip, equipment, and carrier level have touted the wonders that virtual concatenation delivers. But, what often gets lost in these discussions are the real challenges that chip and equipment developers will face when implementing virtual concatenation in a real-world design.

In this two-part series, we’ll examine the design issues that developers will encounter when implementing virtual concatenation in a system level design. In Part 1, we’ll examine the basic benefits of virtual concatenation, the difference between high- and low-order virtual concatenation pipes, and differential delay issues. In Part 2, we’ll take a detailed look at the link capacity adjustment scheme (LCAS).

Why VC Is So Hot
Much has already been said and written about the benefits of virtual concatenation over current payload mapping capabilities of Sonet and SDH. Table 1 summarizes the individual payload capacities of different commonly used Sonet or SDH paths. The table includes both high- and low-order paths with and without standard contiguous concatenation (denoted by the “c”).

While allowing a range of bandwidths to be provisioned, these current mappings do not have the granularity required to make efficient use of the existing network infrastructure. One other important point to note is that contiguous concatenation of VT1.5/VC-11s or VT2/VC-12s is not supported.

Table 1: Current Sonet and SDH Payload Capacities

 

Container (Sonet/SDH) Type Payload Capacity (Mbit/s)
VT1.5/VC 11 Low Order 1.600
VT2/VC 12 Low Order 2.176
STS-1/VC 3 High Order 48.384
STS-3c/VC 4 High Order 149.76
STS-12c/VC 4-4c High Order 599.04
STS-24c/VC 4-8c High Order 1198.08
STS-48c/VC 4-16c High Order 2396.16
STS-192c/VC 4-64c High Order 9584.64

 

Table 2 lists the payload capacities possible with virtual concatenation. As is shown, concatenation of VT1.5/VC-11s or VT2/VC-12s is supported and the concatenation of high-order paths is much more granular.

Table 2: Virtual Concatenation Payload Capacities

 

Container (Sonet/SDH) Type Payload Capacity (Mbit/s)
VT1.5 Xv/VC 11 Xv Low Order X x 1.600 (X = 1 to 64)
VT2 Xv/VC 12 Xv Low Order X x 2.176 (X = 1 to 64)
STS-1-Xv/VC 3-Xv High Order X x 48.384 (X = 1 to 256)
STS-3c-Xv/VC 4-Xv High Order X x 149.76 (X = 1 to 256)

 

Making Use of Unused Overhead
In addition to allowing more flexible mapping, virtual concatenation also removes two of the restrictions upon which contiguous concatenation relies to reconstruct the signal being carried. These are phase alignment of the members of the concatenation and an inherent sequence order of the members. Consequently, in order to reconstruct the original signal from a virtually concatenated group (VCG), it is necessary to determine the phase alignment and sequence of the received members. The information required to support this is carried in previously unused Sonet/SDH path overhead, which is overhead that is generated by a payload mapper and effectively stays intact regardless of how the payload makes its way through the network to its destination. Note: In SDH, tandem connection monitoring involves the modification of some Path Overhead at an intermediate point.

To put this in context, Figure 1 illustrates the high- and low-order paths and their path overhead. For high-order paths, virtual concatenation uses the H4 byte while for low-order paths, virtual concatenation uses bit 2 of the Z7/K4 byte.

Figure 1: High- and low-order paths and overhead.

For both high- and low-order paths, the information required is structured in a multi frame format as shown in Figure 2. For high-order paths, the multi frame structure is defined by the virtual concatenation overhead carried in the H4 byte. For the low-order paths, on the other hand, the multi frame structure is phase aligned with the multi frame alignment signal (MFAS) of bit 1 of the Z7/K4 byte that carries the extended signal label.

Figure 2: Virtual concatenation multi frame formats.

High-Order Overhead
In high-order paths, the H4 multi frame structure is 16 frames long for a total of 2 ms. Within this structure, there are two multi frame indicators—MFI1 and MFI2. MFI1 is a 4-bit field which increments every frame while MFI2 is an 8-bit field which increments every multi frame.

The most significant and least significant nibbles of MFI2 are sent over the first two frames of a multi frame. Together with MFI1, they form a 12-bit field that rolls over every 512 ms (4096 x 125 μs). This allows for a maximum differential path delay of less than 256 ms to ensure that it is always possible to determine which members of a VCG arrive earliest (shortest network delay) and which members arrive latest (longest network delay).

If the differential delay were 256 ms or more, it would not be possible to know if a member with an {MFI2,MFI1}=0 is 256 ms behind or 256 ms ahead of a member with an {MFI2,MFI1}=2048.
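
The roll-over argument can be made concrete with a small helper. It assumes both members' {MFI2, MFI1} values are sampled at the same instant, and the function name and interface are inventions for illustration.

FRAME_PERIOD = 125e-6        # seconds per frame
MFI_MODULUS = 4096           # 12-bit {MFI2, MFI1} counter

def skew_frames(mfi_a: int, mfi_b: int) -> int:
    # Signed skew of member a relative to member b, in frames. Positive means
    # a shows a newer MFI, i.e., has less network delay. Because the true
    # differential delay is kept below 256 ms (2048 frames), the shorter way
    # around the 4096-frame counter is always the correct interpretation.
    d = (mfi_a - mfi_b) % MFI_MODULUS
    return d if d < MFI_MODULUS // 2 else d - MFI_MODULUS

# Example: skew_frames(0, 2047) == -2047, i.e., member a lags member b by
# 2047 x 125 us, just under the 256 ms limit.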

The second piece of information conveyed in the H4 byte is the sequence indicator (SQ). This is an 8-bit field that, like the MFI2, is sent a nibble at a time over two frames in the multi frame. In this case, it is sent over the last two frames. Consequently, a high-order VCG can contain up to 256 members.

The number of members is obviously limited by the number of paths available in the transport signal. Thus, a 40-Gbit pipe would have to be a reality to have 256 STS 1 or VC 3 members. Referring back to the payload capacities of Table 1, for STS 1 256v/VC 3 256v, the payload capacity is 256 x 48.384 Mbit/s = 12,386.304 Mbit/s. For STS 3c 256v/VC 4 256v, the payload capacity would be about 38 Gbit/s (256 x 149.76 Mbit/s).

Low-Order Overhead
Figure 1 above shows that low-order paths have an inherent multi frame structure of 4 Sonet/SDH frames (or 500 μs). As illustrated in Figure 2, the virtual concatenation multi frame structure, delineated by the MFAS pattern in the extended signal label bit (bit 1 of K4), is 32 of these 500 μs multi frames for a total VC multi frame (or should we say multi multi frame) duration of 16 ms.

Within the virtual concatenation multi-frame structure of bit 2 of the K4 byte, again, there is a multi frame indicator (MFI) and an SQ. In this case, the MFI is a 5-bit field sent over the first five 500 μs multi frames of the VC multi frame that rolls over every 512 ms (32 x 16 ms). Again, this permits a maximum differential delay across all members of a low-order VCG of less than 256 ms.

The SQ for LO paths is a 6-bit field which is transmitted over virtual concatenation multi frames 6 through 11 allowing for up to 64 members on a low-order VCG. Again, using the values in Table 2, for VT1.5 64v/VC 11 64v, the payload capacity is 102.4 Mbit/s and the payload capacity of a VT2 64v/VC 12 64v is 139.264 Mbit/s.

Differential Delay alignment
When data is mapped into a VCG, it is essentially ‘demultiplexed’, on a byte by byte basis, across the members of the VCG in the sequence provisioned (reflected by the SQ bytes of each member). At the destination, these discrete paths must be ‘remultiplexed’ to form the original signal. Allowance for differential delay across the members of a VCG implies that all members must be delayed to that of the maximum member such that the ‘remultiplexing’ can be performed correctly.

As a concept, differential delay alignment is not particularly complex. Each member has its data written into a buffer upon reception along with some kind of indication as to where the MFI boundaries are. Data for a given MFI is then read out of each buffer, thus creating phase alignment of the members. The depth of each buffer (the difference between the read and write pointers) is a measure of the difference in delay between that member and the member that has the most network delay.

The main issue with differential delay is the amount of buffer space required. Designers can calculate the amount of buffer space required using the maximum number of members supported. For example, each VT1.5/VC 11 has a payload capacity of 1.6 Mbit/s. The worst case is that a member would have to be delayed by just under 256 ms, which represents 1.6 Mbit/s x 0.256 s = 400 kbit. Similarly, an STS 1/VC 3, with a payload capacity of 48.384 Mbit/s, requires a maximum buffer of about 12.4 Mbit.
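
The per-member buffer arithmetic, written out; the 256 ms bound comes from the MFI roll-over discussion, and the resulting totals are close to, though not exactly matching, the figures in Table 3.

def delay_buffer_bits(payload_mbps: float, max_skew_s: float = 0.256) -> float:
    # Worst case: one member must be buffered for just under 256 ms.
    return payload_mbps * 1e6 * max_skew_s

print(delay_buffer_bits(1.6))              # VT1.5/VC 11: ~409,600 bits (~400 kbit)
print(delay_buffer_bits(48.384))           # STS-1/VC 3:  ~12.4 Mbit
print(84 * delay_buffer_bits(1.6) / 1e6)   # 84 VT1.5s in an STS-3: ~34 Mbit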

These numbers may not seem significant until one considers the number of paths in a given transport signal. Table 3 shows the memory requirements for some potential combinations of virtual concatenation path types and the transport signals that may carry them. Note: the calculations in Table 3 reflect maximum buffer sizes on all paths assuming only payload data is buffered. At least one member of each VCG, by definition, will have minimal buffering so the actual requirements will be slightly lower. If any Path Overhead is also buffered, then the requirements may rise.

Table 3: Virtual Concatenation Delay Buffer Requirements for Various Transport Signals

 

Virtual Concatenation Path Type Transport Signal Number of Paths total Delay Buffer Size
VT1.5/VC 11 STS-3/STM-1 84 33 Mbit
VT1.5/VC 11 STS-12/STM-4 336 131 Mbit
VT2/VC 12 STS-3/STM-1 63 33.5 Mbit
VT2/VC 12 STS-12/STM-4 252 134 Mbit
STS-1/VC-3 STS-12/STM-4 12 142 Mbit
STS-1/VC-3 STS-48/STM-16 48 567 Mbit
STS-3c/VC-4 STS-12/STM-4 4 146 Mbit
STS-3c/VC-4 STS-48/STM-16 12 585 Mbit

 

It is clear from Table 3 that, even for low bandwidth mapping/demapping devices (STS-3/STM-1) that support virtual concatenation, it is impractical to provide on-board buffers allowing for 256 ms of differential delay.

The obvious way to solve this problem is to equip mapping/demapping devices with interfaces to external memory that is large enough to hold the amounts of data listed above. Again this sounds straightforward but there is another consideration that complicates the solution. The data transfer rate between the mapper/demapper and the external buffer memory is twice that of the transport signal rate. This is because the data must be both written to and read from the buffers at the transport signal rate. For an OC 48/STM 16 this amounts to close to 5 Gbit/s. Even with 32-bit wide memory, this results in approximately 150 Mtransfers/s.

The memory options that support these rates are not plentiful. Essentially, these devices must support external SDRAM or SRAM. SDRAM may seem like a good solution due to the large capacities available and the apparent speed that DDR and QDR SDRAMs can support. These speeds can only be achieved, however, if access to the memory involves sustained bursts to sequential memory blocks where successive blocks sit in different pages within the SDRAM structure. This can’t easily be guaranteed, as the allocation of memory is entirely dependent on the type, number, and supported delay of the members of all VCGs being terminated by the device.

SRAMs, on the other hand, can easily keep up with the transfer rates required with no restriction on the order that data is either written or read but capacities of 500 Mbit can be prohibitive in cost and real estate. Consequently, component vendors must choose carefully how much differential delay and what type of external memory their mapper/demapper devices will support.

Virtual Concatenation: Knowing the Details continues
The hype behind virtual concatenation has been growing for more than a year now. And the link capacity adjustment scheme is one of the reasons why. LCAS enhances the capabilities provided by virtual concatenation, allowing operators to adjust virtually concatenated groups (VCGs) on the fly, thus improving network utilization even further.

But, like virtual concatenation, LCAS implementation can be quite challenging for today’s chip and equipment designers. In Part 1, we looked at virtual concatenation and the implementation issues designers will face using this technology. Now, in Part 2, we’ll focus our attention on describing how LCAS works and the design issues engineers will face when using this technology in a chip or system design.

Understanding LCAS
The link capacity adjustment scheme (LCAS) mainly attempts to address two of the tricky issues associated with virtual concatenation: the ability to increase or decrease the capacity of a VCG and the ability to deal gracefully with member failures.

With LCAS, not all members of a VCG need to be active in order to pass data from the source (So), to the Sink (Sk). Once a VCG is defined, the So and Sk equipment are responsible for agreeing which members will carry traffic. There are also procedures that allow them to agree to remove or add members at any time. To achieve this, signaling between the source and sink is required and some of the reserved fields in the virtual concatenation overhead are used for this purpose.

Within LCAS, a control packet is defined that carries the following fields:

  • Member status (MST)
  • Re-sequence acknowledge (RS-Ack)
  • Control (CTRL)
  • Group ID (GID)
  • CRC-3/CRC-8 (3 for LO, 8 for HO)

The position of these fields within the VC multi-frames for high- and low-order paths is shown in Figure 3. Note that, for high-order paths, the control packet begins with the MST field in MFI n and ends with the CRC-8 field in MFI n+1.

Figure 3: Signaling overhead associated with LCAS.

The MST field provides a means of communicating, from the Sk to the So, the state of all received VCG members. The state for each member is either OK or FAIL (1 bit). Since there are potentially more members than bits in the field in a given VC multi-frame, it takes 32 high-order virtual concatenation multi-frames and 8 low-order virtual-concatenation multi-frames to signal the status of all members. This signaling allows the Sk to indicate to the So that a given member has failed and may need to be removed from the list of active members of the VCG.

The RS-Ack field is a bit that is toggled by the Sk to indicate to the So that changes in the sequence numbers for that VCG have been evaluated. It also signals to the So that the MST information in the previous multi-frame is valid. With this signaling, the So can be informed that the changes it has requested (either member addition or removal) have been accepted by the Sk.

The MST and RS-Ack fields are identical in all members of the VCG upon transmission from the Sk.

The Control Field
The control field allows the So to send information to the Sk describing the state of the link during the next control packet. Using this field, the So can signal that the particular path should be added (ADD) to the active members, be deleted (or remain deleted) from the active members (IDLE), or should not be used due to a failure detected at the Sk (DNU). It can also indicate that the particular path is an active member (NORM) or the active member with the highest SQ (EOS). Finally, for compatibility with non-LCAS virtual concatenation, the CTRL field can indicate that fixed bandwidth is used (FIXED).

The Group ID field provides a means for the receiver to determine that all members of a received VCG come from the same transmitter. The field contains a portion of a pseudo-random bit sequence (2¹⁵ − 1). This value is the same for all members of a VCG at any given MFI.

Finally, the CRC field provides a means to validate the control packet received before acting on it. In this way, the signaling link is tolerant of bit errors.

Basic LCAS Operation
When a VCG is initiated, all MSTs generated by the Sk are set to FAIL. It is then the responsibility of the So to add members to the VCG to establish data continuity. The So can set the initial SQ numbers of multiple members and set their CTRL fields to ADD. The Sk will then set all the corresponding MSTs to OK.

The member whose MST is first recognized as OK by the So has its SQ renumbered to the lowest value, and this re-sequencing will be transmitted to the Sk. Multiple members can be recognized at the same time by the So, and the re-sequencing may involve more than one member.

The Sk acknowledges the re-sequencing by toggling the RS-Ack in all members. After the RS-Ack is received by the So, it will set the CTRL field for the corresponding members to NORM, with the highest SQ member being set to EOS. This process continues until all members have been added to the active group. At this point, the CTRL field for all but one of the added members will be NORM. The member with the highest SQ will have its CTRL field set to EOS.

Adding, Deleting Members
When members are to be added or deleted, the sequence is similar. The CTRL field for the member or members in question will be set by the So to either ADD or IDLE depending on the operation requested. The Sk will then respond with MST values of either OK or FAIL respectively. Again, the order that the updated MST values are seen and confirmed by the So will determine how the SQ values are updated.

In the event of a network failure resulting in a member failure, the Sk will set the corresponding MST (or MSTs) to FAIL. The So, upon seeing and confirming the status of this member (or members), will set the CTRL field for that member (or those members) to DNU.

If the last member has an MST of FAIL, then the next previous member that remains active will have its CTRL field changed from NORM to EOS. In the event that the failure is repaired, the MST (or MSTs) will be updated by the Sk to OK. At this point, the So can update the CTRL value (or values) to NORM to indicate that the member (or members) will again carry traffic at the next boundary.

In all cases, bandwidth changes take place on the high-order frame or low-order multi-frame following the reception of the CRC of the control packet where the CTRL fields change to or from NORM. Specifically, this is synchronized with the first payload byte after the J1 or J2 following the end of the control packet. This byte will be the first one either filled with data in the case of an added member or the first one left empty in the case of a deleted member.
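
To summarize the source-side bookkeeping, here is a rough Python sketch. It glosses over the RS-Ack handshake, SQ renumbering, and the frame-boundary timing just described, and the member structure and field names are inventions for illustration only, not taken from the standard.

from enum import Enum

class Ctrl(Enum):
    FIXED = "FIXED"
    ADD = "ADD"
    NORM = "NORM"
    EOS = "EOS"
    IDLE = "IDLE"
    DNU = "DNU"

def update_source_ctrl(members: list) -> None:
    # members: list of dicts with 'sq', 'ctrl', and the latest confirmed
    # 'mst_ok' reported by the sink for that member.
    for m in members:
        if m["ctrl"] in (Ctrl.NORM, Ctrl.EOS) and not m["mst_ok"]:
            m["ctrl"] = Ctrl.DNU      # sink reported FAIL: do not use
        elif m["ctrl"] == Ctrl.DNU and m["mst_ok"]:
            m["ctrl"] = Ctrl.NORM     # failure repaired: member rejoins
        elif m["ctrl"] == Ctrl.ADD and m["mst_ok"]:
            m["ctrl"] = Ctrl.NORM     # newly accepted member becomes active
    # The active member with the highest SQ carries EOS; the rest carry NORM.
    active = [m for m in members if m["ctrl"] in (Ctrl.NORM, Ctrl.EOS)]
    for m in active:
        m["ctrl"] = Ctrl.NORM
    if active:
        max(active, key=lambda m: m["sq"])["ctrl"] = Ctrl.EOS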

LCAS Design Considerations
One of the most attractive features of LCAS is the fact that it provides a mechanism to map around VCG member failures by allowing them to be temporarily removed from a VCG without user intervention. Typically, however, paths will be protected in some fashion, whether it is 1:N span protection or, more likely, unidirectional path switched ring/subnetwork connection protection (UPSR/SNCP)-type protection within the network. If this is the case, then, on a network failure, it would easily be possible for an Sk to lose a member and signal that condition via the MST field, and then regain that member after the So has already initiated temporary removal from the group. Without the ability to allow existing network APS schemes to settle before acting at the LCAS level, this kind of scenario can lead to a considerable amount of thrashing in the reestablishment of data continuity for the VCG after a network failure.

Similarly, while the flexibility of SQ assignment can allow for graceful inclusion or exclusion of VCG members, it can also create significant complication in managing the members. When an So chooses to add multiple members, it must arbitrarily set the SQ values for each member that it wishes to add to something greater than the maximum existing SQ value.

Once an MST=OK is received for any of those members, the So then sets that member’s SQ value to one greater than the highest active member. This means that all the ‘new’ SQ values of the other members waiting to be added may need to be rearranged.

The RS-Ack is defined specifically so that the Sk can evaluate the new SQ information and acknowledge it before data is placed on the new member, but any table driven alignment scheme based on received SQ values must be tolerant of these changes. Also, software must be able to manage the changes in correlation between Sonet/SDH paths and their SQ values over time.

Additionally, when members are deleted from a VCG, their new SQ values can be any value greater than the highest active member. There is no restriction that these values be unique so many inactive members can share the same SQ. Again, context-switched state machines that run through the SQ values ensuring that all members are processed properly must handle this condition.

Other potential problems can arise from how unused members are handled. If unused paths are received with AIS or unequipped, then the path overhead will contain no virtual concatenation signaling of any kind. There is then no way to determine any kind of virtual concatenation multi-frame alignment of these members. So it must be possible to achieve alignment on the working members of a VCG regardless of the state of all other members.

Moving Processing to Software
As seen above, complications can arise in how different information is interpreted when using LCAS. Due to this complexity and the signaling durations involved (e.g. 64 or 128 ms required to update all MST), it is attractive to move some of the processing to software where more variables can, more easily, be considered. In fact some functions, such as waiting for APS to settle, can be better handled in software.

Care must be taken in establishing the hardware/software partition, however. For example, if a system needs to support hitless addition or deletion of members, the time between the reception of a control packet and when the data multiplexing configuration changes is just 55.6 μs for high-order paths and 250 μs for low-order paths. Software will typically not be able to reconfigure the data multiplexing quickly enough once it has determined what changes are about to occur. It is possible, depending on how many VCGs have changes going on at the same time, that a software implementation will not even sort out the changes before they happen.

Wrap Up
Access equipment that supports both high- and low-order mapping allows the service provider to tailor the connectivity granularity and cost based on the requirements of the customers at each installation while only needing to worry about a limited product inventory. With virtual concatenation, the service provider can efficiently provide this appropriate level of connectivity without having to resort to statistical multiplexing techniques that complicate service level agreements (SLAs). With LCAS, bandwidth flexibility and fault tolerance are added.

Designing the systems and components to support virtual-concatenation-enabled Sonet and SDH infrastructures is not trivial, however. Designers have to draw on their experience with legacy equipment and the problems found in the network today to ensure a robust implementation of tomorrow’s network.
