1. Introduction

The constraint that decides whether a multi-rail optical line system ships or stays on a slide deck is not photonic. It is the inline amplifier hut. ILA huts along North American terrestrial corridors were built decades ago along roadsides and railroad rights-of-way. They are physically small, fed by a few hundred amperes of −48 V DC, and cooled by HVAC that consumes a large fraction of every available watt before any optical equipment receives power. AI-driven scale-across has pushed required capacity per route from a single fully filled fiber pair to tens of fiber pairs lit in parallel, and the physical envelope of the ILA hut has not changed to accommodate it. Per-channel OSNR margin at the receiver still sets the upper bound on modulation order regardless of how many rails sit in parallel, but rail count now sets the upper bound on per-route capacity.

Three numbers sit at the boundary between what the hut allows and what the optical shelf must deliver. Shelf feeds must remain below 3000 W to fit power-distribution standards inherited from central-office practice. Shelf depth must remain below 300 mm to fit ILA hut racks that were never built to ETSI 600 mm or EIA 1000 mm depth. Shelf height for a high-rail-count line card must remain below 10 RU so that a single hut can host enough rails to justify a route upgrade. These numbers appear consistently in industry technical presentations on multi-rail system challenges and shape the public descriptions of multi-rail line system architectures across the optical equipment industry.

This article connects the three form-factor numbers to the physics they constrain. It examines why pump lasers dominate the EDFA power budget, why the gain medium is the only component that must remain replicated per rail, and why uncooled high-power 980 nm pumps and LCoS-based dynamic gain equalizers became the enabling technologies for sub-linear power scaling. It works through a per-rail vs shared-component matrix, presents the rail-count power scaling that current multi-rail systems claim, and identifies the correlated failure mode that pump sharing introduces. The reader will leave with a clear map between deployment-site physics and the component-level decisions that define a multi-rail amplifier line card.

2. The physical envelope of the terrestrial ILA hut

An ILA hut hosts in-line amplifier sites between fiber spans of roughly 80 km on a long-haul route. It contains no add-drop functionality, no transponders, no client-facing equipment — only the optical amplification and instrumentation needed to keep the wavelengths usable until they reach the next ROADM site. The hut is the smallest, simplest, and most replicated building block of a long-haul network, and its specification was set when a single fiber pair carrying ten 10 Gb/s wavelengths was the operational maximum.

2.1 What the hut actually delivers

A typical brownfield ILA hut along a North American long-haul route holds a handful of equipment racks and a couple hundred amperes of available DC power on the −48 V bus, sized for the original equipment plus modest expansion. Power-distribution standards published by Telcordia in GR-3160 and environmental qualification under GR-63-CORE define the operating envelope: temperature, humidity, vibration, EMI, fire safety, and the maximum dissipated power per rack that the cooling plant can remove. ETSI EN 300 019 covers comparable European deployments. These specifications are stable, not aspirational — equipment that violates them does not deploy, regardless of how attractive the photonic performance is.

2.2 Where the power actually goes

Cooling consumes up to 70% of available site power in older terrestrial huts, a figure widely cited in published descriptions of multi-rail line system architecture. The remaining 30% is the actual budget for optical and electronic equipment. A hut nominally rated at 10 kW of total DC capacity may therefore deliver only 3 kW for amplifier shelves once the HVAC plant is running at design conditions. This is the number the line-system designer must respect, not the rack-plate capacity.

The cooling overhead is structural, not accidental. ILA huts are typically uninsulated or lightly insulated metal enclosures sited in environments with wide ambient temperature swings. They use direct-expansion air conditioning rather than the chilled-water plants that data centers use. The thermal load is dominated by ambient gain through the building envelope plus the heat dissipated by every active component inside. Cutting that heat at the source — through component-level efficiency rather than larger HVAC — is the only path that does not require rebuilding the hut.

Figure 1: ILA hut power budget cascade. A nominal 10 kW DC plant gives up roughly 70% to HVAC before the first watt reaches optical equipment. The remaining 3 kW supports approximately one fully populated shelf per site, derated to 2400 W usable. Every multi-rail line card design decision — pump sharing, TEC removal, LCoS-based DGE pooling — exists to fit more rails into this 2400 W envelope.
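The cascade is short enough to express as arithmetic. The sketch below reproduces Figure 1's numbers in Python; the constants are the representative values used throughout this article, not a standard, and real sites vary.

```python
# Illustrative ILA hut power-budget cascade (Figure 1).
# Constants are the article's representative values, not a standard.

SITE_DC_CAPACITY_W = 10_000   # nominal -48 V DC plant capacity
HVAC_FRACTION      = 0.70     # cooling overhead in an older terrestrial hut
SHELF_FEED_LIMIT_W = 3_000    # per-shelf bus-bar / breaker ceiling
DERATING           = 0.80     # steady-state factor (inrush, fans, conversion)

equipment_budget_w = SITE_DC_CAPACITY_W * (1 - HVAC_FRACTION)
usable_shelf_w = min(equipment_budget_w, SHELF_FEED_LIMIT_W) * DERATING

print(f"Equipment budget per site: {equipment_budget_w:.0f} W")  # 3000 W
print(f"Usable power per shelf:    {usable_shelf_w:.0f} W")      # 2400 W
```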

2.3 Why ETSI 600 mm and EIA 1000 mm depths do not help

Modern central offices and data centers accept rack depths of 600 mm to 1000 mm, allowing equipment shelves to extend deep into the rack with extensive cabling and cooling clearance. ILA huts predate this convention. Many use 300 mm-deep racks designed for legacy SDH and early DWDM equipment, with no path to expansion without removing walls. Multi-rail amplifier shelves must therefore design to the 300 mm depth limit even though every other deployment site allows more, because the hut population on any installed long-haul route is dominated by the legacy form factor.

Takeaway: An ILA hut is a fixed envelope of a few hundred DC amperes minus 70% cooling overhead, a small number of legacy-depth racks, and a thermal plant sized for 10 Gb/s-era equipment. Multi-rail system design starts from this envelope and works backwards to the shelf, the line card, and finally the components.

3. From site constraint to shelf constraint

The three shelf-level numbers — sub-3000 W, sub-300 mm, sub-10 RU — translate the hut envelope into specific equipment design rules. None of them is a vendor preference. Each one corresponds to a specific physical limit of the deployment site and propagates downward through the design.

3.1 Sub-3000 W shelf feed

Power distribution within a hut uses bus bars sized for the original equipment loads. The 3000 W ceiling per shelf reflects the maximum continuous draw that legacy −48 V circuit breakers, fuse panels, and bus bars can deliver to a single equipment slot without breaker derating. Pushing past 3000 W means re-running power infrastructure, which means truck rolls, planned outages, and capital expense per hut multiplied across hundreds or thousands of sites on a network. For an operator with 500 ILA sites on a transcontinental route, a shelf design that requires hut electrical upgrades is functionally non-deployable.

The 3000 W ceiling is a hard constraint at the shelf level, but the actual operating budget is lower. Headroom must remain for inrush current at startup, cooling fan power inside the shelf, and the power conversion losses between the −48 V input and the optical components. Steady-state power draw per shelf is typically planned at 80% of the rated maximum — about 2400 W of usable optical and electronic load.

3.2 Sub-300 mm depth and sub-10 RU height

Sub-300 mm depth is set by legacy rack frames in the hut. Sub-10 RU height is set by the requirement to host multiple shelves per rack while leaving cable management and airflow clearance. A single rack with 42 RU of usable height can host four shelves at 10 RU each plus 2 RU for power-distribution and fiber routing — or three shelves at 12 RU each, which forces the operator to either sacrifice a fourth shelf or run two ILA sites in parallel along the same route. Sub-10 RU is the threshold that keeps four shelves per rack achievable.

The sub-10 RU target also drives front-panel real estate. Each rail needs front-panel fiber connections for its amplifier input and output, plus any monitoring ports. A two-rail shelf needs at least four fiber connections; an eight-rail shelf needs at least sixteen. Front-panel density at sub-10 RU only works with high-density connectors and pre-terminated patch cords — an MPO-style approach borrowed from data-center practice — because individual LC connectors at this scale would consume the panel before all rails were terminated.
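A minimal sketch of the packing arithmetic, assuming the 42 RU rack with 2 RU of overhead and two front-panel ports per rail described above; the helper names are illustrative:

```python
# Rack-packing and front-panel arithmetic from Section 3.2.
# RU figures and ports-per-rail follow the text; names are illustrative.

RACK_RU = 42
OVERHEAD_RU = 2   # power distribution + fiber routing

def shelves_per_rack(shelf_ru: int) -> int:
    """Whole shelves that fit after reserving the overhead RU."""
    return (RACK_RU - OVERHEAD_RU) // shelf_ru

def front_panel_fibers(rails: int, ports_per_rail: int = 2) -> int:
    """Amplifier in + out per rail; monitoring taps add more."""
    return rails * ports_per_rail

print(shelves_per_rack(10))    # 4 -> four shelves per rack achievable
print(shelves_per_rack(12))    # 3 -> a 12 RU shelf costs the fourth shelf
print(front_panel_fibers(8))   # 16 connections on one eight-rail shelf
```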

Figure 2: Constraint cascade from ILA hut envelope to component-level design. Site-level limits on DC power, cooling overhead, and rack depth propagate into shelf-level form-factor numbers (sub-3000 W, sub-300 mm, sub-10 RU), which then dictate which components must be shared and which must remain per-rail.

The cascade in Figure 2 explains why a single number — say, the sub-3000 W shelf feed — is not interesting in isolation. It becomes interesting when read alongside the cooling overhead, the rack depth, and the rail count target. Together, these numbers force the line-card architect to find power and space at the component level, because there is none at the system level.

4. Sub-linear power scaling at the component level

Inside a conventional inline EDFA, approximately 80% of the electrical power dissipates in pump-photon generation — pump laser drive current and any associated thermo-electric cooling. The remaining 20% covers control electronics, optical channel monitoring, dynamic gain equalization, OTDR and OSA functions, and small-signal handling. This ratio is broadly consistent across vendor implementations and follows directly from the wall-plug efficiency of available 980 nm and 14XX nm pump diodes. The architectural fundamentals — pump absorption, stimulated emission, and erbium population dynamics — are covered in detail in the MapYourTech treatment of basics of EDFA technology. For background on pump-source choices, see the comparison of 980 nm and 1480 nm pump-based EDFAs.

4.1 Why pump dominance forces the architecture

If pump generation drives 80% of EDFA power, then linearly replicating an EDFA per rail produces almost-linear total system power. A four-rail shelf would draw roughly four times the power of a single-rail shelf. With a 2400 W usable shelf budget and a 600 W single-rail draw, the operator gets four rails — and runs out of shelf-power budget before reaching the eight or sixteen rails per node that AI scale-across deployments require. Linear scaling does not fit the envelope.

The architecture that does fit shares the pump infrastructure across rails. A single uncooled high-power 980 nm pump diode delivers roughly 1 W to 1.4 W of optical power in industry-typical packages — enough to pump multiple parallel erbium-doped fiber gain media simultaneously through a fan-out coupler, provided the per-rail signal power and gain target sit within the saturation envelope of the shared pump. Multi-chip packages combining two pump dies in a single 10-pin or 3-pin housing further reduce footprint, replacing what would otherwise be two physical pump assemblies per rail with one assembly serving two or more rails.
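A hedged sketch of the fan-out budget follows. The pump power is the package range quoted above; the coupler loss and per-rail pump requirement are assumptions for illustration, since actual values depend on the EDF design and gain target.

```python
import math

# Shared-pump fan-out budget check. PUMP_POWER_MW follows the package
# range quoted in the text; COUPLER_LOSS_DB and the per-rail pump
# requirement are assumed values for illustration only.

PUMP_POWER_MW = 1_400    # uncooled 980 nm pump, high end of 1.0-1.4 W
COUPLER_LOSS_DB = 0.5    # assumed excess loss of the 1:N fan-out coupler

def pump_serves(n_rails: int, per_rail_pump_mw: float) -> bool:
    split_loss_db = 10 * math.log10(n_rails)   # ideal 1:N split loss
    delivered_mw = PUMP_POWER_MW * 10 ** (-(split_loss_db + COUPLER_LOSS_DB) / 10)
    return delivered_mw >= per_rail_pump_mw

for n in (2, 4, 8):
    print(n, pump_serves(n, per_rail_pump_mw=150))   # True, True, True
```

With these assumed numbers, one 1.4 W pump still delivers roughly 156 mW to each of eight EDFs, which is why the saturation envelope of the shared pump, not the fan-out itself, sets the practical rail count.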

4.2 Removing the TEC

Cooled pump diodes use a thermo-electric cooler (TEC) to hold the laser at a target case temperature, typically 25 °C, regardless of ambient. The TEC stabilizes wavelength and threshold current at the cost of significant electrical drive — a TEC pulling several watts of heat against a 50 °C ambient can dissipate as much electrical power as the pump diode itself. In a hut where cooling already consumes 70% of total power, every TEC is a double tax: it consumes electrical power directly, and it puts heat back into the air that the HVAC plant must remove again.

Modern uncooled high-power 980 nm pump diodes operate without a TEC. Their wall-plug efficiency stays roughly flat across the 0–70 °C operating window, while the TEC drive power of a cooled pump grows roughly exponentially above 50 °C ambient as the TEC works harder. The crossover point — beyond which uncooled wins by a wide margin — sits in the 40–50 °C ambient range, well within the operating envelope of older ILA huts during summer peaks. Removing TECs is therefore not a marginal optimization; it is the only path that keeps total dissipation flat across the temperature range an actual hut sees.
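A toy model makes the crossover concrete. The functional forms and coefficients below are assumptions chosen only to reproduce the qualitative behavior described above (flat uncooled draw, steep TEC growth past the setpoint), not measured device data:

```python
import math

# Toy dissipation model: cooled vs uncooled pump across ambient.
# All coefficients are illustrative assumptions, not device data.

PUMP_DRIVE_W = 4.0   # assumed electrical drive at full optical output

def uncooled_draw_w(ambient_c: float) -> float:
    return PUMP_DRIVE_W   # wall-plug efficiency roughly flat over 0-70 C

def cooled_draw_w(ambient_c: float, setpoint_c: float = 25.0) -> float:
    dt = max(0.0, ambient_c - setpoint_c)
    tec_w = 0.4 * math.exp(dt / 10.0)   # assumed exponential TEC growth
    return PUMP_DRIVE_W + tec_w

for t_c in (25, 40, 50, 60, 70):
    print(t_c, round(uncooled_draw_w(t_c), 1), round(cooled_draw_w(t_c), 1))
# Around 50 C the modeled TEC dissipates more than the pump diode
# itself, and every watt of it returns to the air the HVAC must remove.
```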

Combining shared pumps, TEC removal, and pooled gain equalization, multi-rail systems achieve sub-linear total-power scaling against rail count. A four-rail card draws less than three times the power of a single-rail card, not four times. An eight-rail card draws less than five times. The shape of the curve — rather than any single point on it — is what makes the architecture deployable inside a fixed 2400 W shelf budget.
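A minimal model of the two curves, with the shared-architecture split chosen so the totals match the figures cited in this article (2400 W budget, ~600 W single-rail draw, ~1200 W for eight shared rails); the 240 W baseline and 120 W increment are assumptions, not a published component breakdown:

```python
# Linear vs sub-linear shelf-power scaling against rail count.
# BASE_W / PER_RAIL_W are assumed so totals match the article's
# cited figures; they are not a published component breakdown.

SHELF_BUDGET_W = 2400
SINGLE_RAIL_W = 600    # conventional per-rail amplifier draw
BASE_W = 240           # shared pumps, DGE, metrology, controller
PER_RAIL_W = 120       # marginal cost of one more rail

def linear_w(n: int) -> int:
    return n * SINGLE_RAIL_W

def shared_w(n: int) -> int:
    return BASE_W + n * PER_RAIL_W

for n in (1, 2, 4, 8):
    lin, shr = linear_w(n), shared_w(n)
    print(f"{n} rails: linear {lin:4d} W "
          f"({'fits' if lin <= SHELF_BUDGET_W else 'exceeds'}), "
          f"shared {shr:4d} W "
          f"({'fits' if shr <= SHELF_BUDGET_W else 'exceeds'})")
# Linear scaling exhausts the 2400 W budget at exactly four rails;
# the shared model fits eight rails in 1200 W: under 3x its 360 W
# single-rail draw at four rails, under 5x at eight.
```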

Figure 3: Rack-level density before and after multi-rail. The legacy rack supports four fiber pairs at roughly 600 W each. The multi-rail rack — same physical envelope, same cooling plant — supports thirty-two fiber pairs at roughly 150 W each. The 8× density and 75% per-rail power reduction figures are illustrative and align with publicly described characteristics of current-generation multi-rail line systems; actual values depend on specific deployment conditions.

4.3 The rail-count power curve

Figure 4: Total shelf power vs rail count for two architectures. Linear scaling — replicating a per-rail amplifier — exhausts the 2400 W usable shelf budget at four rails. Sub-linear scaling, achieved by sharing pumps, TECs, and metrology across rails, fits eight rails into the same envelope. Curve shapes are illustrative and reflect industry-published claims of approximately 75% power reduction per rail at high rail counts.

The 32× density and 75% power-reduction figures publicly cited for current-generation multi-rail line systems are consistent with this curve shape, applied at the rail counts (8, 16, 32) those products target. These numbers are vendor marketing claims, not standardized benchmarks. Actual achievable density depends on the specific optical conditions of the deployment — span loss, modulation format, OSNR margin, and whether C-band only or C+L is in use.

5. The per-rail / shared-component boundary

Multi-rail architecture works because most of an EDFA's bill of materials can be shared, but not all of it. The boundary between shared and per-rail components is set by physics, not by cost or vendor preference. Light from one fiber pair cannot be amplified by a gain medium already amplifying light from a different fiber pair — they would interfere in the same erbium population. The gain medium therefore stays per-rail. Everything else — the energy that pumps that gain medium, the equalizer that flattens its output, the monitor that measures the result — is a candidate for sharing.

5.1 What stays per-rail

The erbium-doped fiber itself must remain per-rail because it physically carries that rail's optical signals. The same applies to the input and output isolators that prevent backward-propagating ASE from corrupting the upstream span and to the variable optical attenuators on each rail's input that set per-channel power. These are the components that touch the rail's signal path directly. Their replication scales linearly with rail count, but their individual power draw is small — a few hundred milliwatts of control electronics per VOA, near zero for passive isolators and the EDF itself.

5.2 What can be shared

Pump lasers, dynamic gain equalizers, optical channel monitors, and OTDR/OSA functions are all architecturally independent of which fiber pair they serve at any given instant. A pump diode delivers photons; the photons are routed through a coupler to whichever EDF needs them. A dynamic gain equalizer based on liquid crystal on silicon (LCoS) technology can be partitioned into multiple independent attenuation profiles within a single physical device. An OCM connected to a 16-port optical switch can sweep through every rail's input and output ports in turn, sampling each at sub-second intervals — fast enough for streaming telemetry without requiring 16 separate spectrum analyzers.
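The sweep arithmetic for the shared OCM is worth a quick check. The switching and dwell times below are assumptions for illustration; the structure shows why one switched OCM meets a sub-second telemetry interval across every rail:

```python
# Shared-OCM sweep period: one monitor behind a 16-port switch,
# sampling each rail's input and output port in turn.
# SWITCH_MS and DWELL_MS are assumed values for illustration.

SWITCH_MS = 5    # assumed port-to-port switch settling time
DWELL_MS = 20    # assumed OCM capture time per port

def sweep_period_ms(n_rails: int, ports_per_rail: int = 2) -> int:
    return n_rails * ports_per_rail * (SWITCH_MS + DWELL_MS)

for rails in (4, 8):
    period = sweep_period_ms(rails)
    print(f"{rails} rails: {period} ms per full sweep "
          f"({'sub-second' if period < 1000 else 'too slow'})")
# 8 rails x 2 ports x 25 ms = 400 ms: every port refreshed at
# sub-second cadence with a single spectrum engine instead of 16.
```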

Figure 5: Multi-rail shelf internal architecture. Three rails carry their own signals through dedicated EDF gain media, VOAs, and isolators (orange). Pump photons originate in two shared dual-chip 980 nm pump pools (blue) and fan out via couplers to every rail's EDF stage. Gain equalization comes from a single LCoS device (purple) partitioned into per-rail profiles. Optical monitoring, OTDR, and OSA functions share a switched metrology fabric (green) sweeping every rail's input and output ports. Only the components that physically carry rail-specific signals stay per-rail; everything else pools.
Table 1: Per-rail vs shared components in a multi-rail amplifier card

| Component | Function | Replication | Why |
|---|---|---|---|
| Erbium-doped fiber | Active gain medium | Per rail | Physically carries the rail's signal; cannot be shared without routing the light |
| Input/output isolators | Reverse-light blocking | Per rail | Protects each rail's signal path from backward ASE |
| Variable optical attenuators | Per-rail input power control | Per rail | Sets independent operating point per fiber pair |
| 980 nm / 14XX nm pump diodes | Pump photon generation | Shared via dual/multi-chip packages | Photons are fungible across EDFs through fan-out couplers |
| Dynamic gain equalizer | Gain ripple flattening | Shared via LCoS array | One LCoS device implements multiple independent attenuation profiles |
| Optical channel monitor | Per-channel power measurement | Shared via fast optical switch | Sub-second sweep across rail ports meets streaming telemetry requirements |
| OTDR / OSA | Fault localization, spectrum analysis | Shared via switched metrology | Diagnostic functions tolerate sequential access across rails |
| Control / management plane | Telemetry, SDN northbound | Shared (single line card controller) | Software function, naturally pooled |

5.3 The correlated failure mode

Sharing introduces a failure mode that per-rail architectures do not have. A single pump laser failure degrades every rail sharing that device simultaneously. A single LCoS DGE failure removes gain equalization on all rails its array serves. Streaming telemetry can detect the failure in under a second, but the affected rails are all already off-spec at the moment of detection. This is a correlated failure — the kind that protection switching and reliability engineering treat differently from independent per-rail failures.

Vendor mitigations include redundant pump pairs with automatic switchover, dual-fed gain stages where two pump assemblies share the load, and software-driven graceful degradation that drops modulation order on affected channels rather than losing them entirely. Telecom MTBF targets — typically 2×10⁵ to 10⁶ hours per assembly, depending on operating temperature — must be met at the assembly level for the shared pumps because their failure has a larger blast radius than a single-rail pump failure would. The architectural limit on rail count per shared pump pair is not technical; it is the amount of correlated risk an operator is willing to accept across parallel paths.
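The arithmetic behind the blast-radius argument is worth making explicit. In the sketch below, assuming identical assembly MTBF, dedicated and shared pumps produce the same expected rail-outage count per year; what sharing changes is that the outages arrive as one correlated event rather than as independent ones. This is an illustration, not a reliability analysis:

```python
# Correlated vs independent pump failures at equal assembly MTBF.
# MTBF is from the article's representative range; the model is a
# sketch, not a reliability analysis.

HOURS_PER_YEAR = 8760
MTBF_H = 1e6
RATE = 1 / MTBF_H   # failures per assembly-hour

def yearly(n_rails: int, shared: bool) -> tuple[float, float]:
    if shared:
        events = RATE * HOURS_PER_YEAR             # one shared assembly
        rails_hit = n_rails                        # every rail, at once
    else:
        events = RATE * HOURS_PER_YEAR * n_rails   # one pump per rail
        rails_hit = 1
    return events, events * rails_hit

for shared in (False, True):
    events, rail_outages = yearly(8, shared)
    print(f"shared={shared}: {events:.4f} events/yr, "
          f"{rail_outages:.4f} rail-outages/yr")
# Both cases expect ~0.07 rail-outages/yr, but the shared case packs
# them into single events that drop all eight rails simultaneously.
```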

6. Deployment patterns and architectural approaches

Multi-rail line systems entered commercial deployment between 2024 and 2026, driven primarily by hyperscaler demand for fiber capacity between AI training campuses. Two architectural patterns dominate the field: the integrated multi-rail line card and a modular multi-shelf alternative. Both have been demonstrated in live network trials.

6.1 The integrated multi-rail line card

The dominant architectural pattern places multiple rails on a single line card, with shared pumps, DGEs, and metrology pooled within the card boundary. Publicly described systems following this pattern cite up to 32× rail density improvement and 75% power reduction relative to single-rail predecessors, with rail counts of 8, 16, or 32 per card depending on the design point. Some implementations support C+L band operation on the same card to double per-fiber capacity. The integrated approach prioritizes density, automation, and fast turn-up of hundreds of fiber pairs at a single site — characteristics that align with scale-across DCI and long-haul AI campus interconnect requirements. These architectures fit within the broader picture of open line systems and multi-vendor coherent wavelengths.

6.1.1 Where the integrated approach wins

Greenfield deployments along new dark fiber routes — where the hut envelope can be designed around the line card rather than the other way around — favor the integrated multi-rail card. Rail count per chassis is high (16 to 32 typical), per-rail power is at the bottom of the curve, and the operator gets a single fault domain per fiber-pair group. The trade-off is the correlated failure mode: a single shared component takes multiple rails offline simultaneously.

6.1.2 Where the integrated approach struggles

Brownfield ILA huts with mixed equipment generations and tight site lease terms find integrated multi-rail cards harder to justify. The card's rail count may exceed the number of fiber pairs actually transiting that site. Stranded rails consume baseline power for shared infrastructure they do not amplify. A modular approach — smaller rail counts per shelf, multiple shelves per site — sometimes maps better to the actual fiber routing.

6.2 Brownfield retrofit constraints

Deploying a multi-rail card into an existing hut faces brownfield realities that greenfield AI campus deployments do not. The hut's HVAC plant was sized for the original equipment. Adding rails — even with 75% power reduction per rail — increases total dissipation if rail count grows faster than per-rail efficiency improves. An eight-rail card consuming 2 kW dissipates the same heat whether the per-rail efficiency was 75% better than legacy or not. The hut must accept the absolute thermal load, not the relative improvement.
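The acceptance test reduces to a single inequality on absolute dissipation. The site numbers in this sketch are assumptions for illustration:

```python
# Brownfield thermal acceptance check: the hut must absorb the card's
# absolute heat load. Site figures are assumed for illustration.

HVAC_CAPACITY_W = 7000   # assumed heat-removal capacity of the hut
EXISTING_LOAD_W = 5500   # assumed dissipation of equipment already installed

def card_fits(card_draw_w: float) -> bool:
    # Essentially every electrical watt the card draws ends up as heat.
    return EXISTING_LOAD_W + card_draw_w <= HVAC_CAPACITY_W

print(card_fits(2000))   # False: an eight-rail card at 2 kW overloads
print(card_fits(1200))   # True:  the hut, however efficient per rail
```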

Operators with hundreds of brownfield ILA sites typically stage deployment: validate the multi-rail card in selected huts that have headroom, monitor inlet/outlet temperatures over a full seasonal cycle, then scale across the route. The largest barrier is rarely the optical engineering. It is the operations process — confirming each hut individually, scheduling truck rolls for HVAC inspection, and renegotiating site lease terms where power draw exceeds the original specification.

6.3 Greenfield AI campus deployments

Greenfield AI scale-across deployments — new dark fiber routes between AI training campuses, often along power-corridor or pipeline rights-of-way — design huts from scratch for multi-rail equipment. These huts target lower cooling overhead (closer to 30–40% rather than 70%), higher per-rack power density (10 kW or more), and ETSI-compatible rack depths. The shelf-level form factor still drives the architecture, but the greenfield envelope provides more headroom — allowing higher rail counts per shelf and earlier introduction of C+L band operation without requiring custom hut designs at every site.

The architectural test for any multi-rail design is whether it deploys in a brownfield hut without civil works. A card that requires power upgrades, HVAC changes, or rack replacements is not solving the deployment problem — it is moving it from the photonics team to the operations team. The 32× density and 75% power-reduction claims become meaningful only when validated in the worst available hut on a route, not in a benchtop lab.

7. Future outlook

The constraint envelope of the ILA hut is not going to relax on the timescale of equipment refresh cycles. Building a new hut takes years; equipping an existing route with multi-rail systems takes months. The next round of architectural improvement therefore has to come from the same direction the current generation came from: more shared infrastructure per line card, higher pump efficiency at the same operating temperatures, and tighter integration of metrology functions.

Photonic integration is the most likely next step. Combining EDF gain stages, DGE elements, and OCM functions onto a single photonic integrated circuit per rail bank would reduce per-rail volume and per-rail power by another factor of two relative to the current discrete-component approach. Standards work in OIF and IEEE on disaggregated multi-rail line system interfaces is at an early stage but is converging on a common control plane that would let operators mix multi-rail line cards from multiple vendors on the same fiber route — extending the open line system model into the multi-rail era. Combining multi-rail operation with C+L spectrum expansion, covered in the MapYourTech overview of C+L band DWDM systems, doubles per-fiber capacity on top of the rail-count multiplier.

The longer-term direction — once the brownfield ILA hut population has been refreshed — points toward cooling redesign at the hut level rather than further shaving at the shelf level. Reducing the 70% cooling overhead to 30–40%, comparable to modern data-center practice, would give every shelf in every hut a 2× power budget without any change at the optical line card. That investment sits on a different timescale and a different capital cycle than the line system itself, but it sets the upper bound on how far the current architecture can scale before something else has to change.

References

  1. Telcordia Technologies, GR-3160 — NEBS Requirements for Telecommunications Data Center Equipment and Spaces.
  2. Telcordia Technologies, GR-63-CORE — NEBS Requirements: Physical Protection.
  3. ETSI EN 300 019 — Environmental Engineering (EE); Environmental conditions and environmental tests for telecommunications equipment.
  4. ITU-T G.652 — Characteristics of a single-mode optical fibre and cable, ITU-T Study Group 15.
  5. ITU-T G.694.1 — Spectral grids for WDM applications: DWDM frequency grid, ITU-T Study Group 15.
  6. Ciena Insights, "What is hyper-rail or multi-rail?" — public blog post on multi-rail architecture, density and power claims.
  7. Cisco SP360 Blog, "Optical innovations deliver resilient, scalable, efficient AI networking" — public blog post on multi-rail open line system architecture.
  8. Coherent Corp., ECOC Market Focus presentation on multi-rail amplifier component design — public conference materials.
  9. P. Poggiolini et al., "The GN-model of fiber non-linear propagation and its applications," Journal of Lightwave Technology.
  10. E. Desurvire, "Erbium-Doped Fiber Amplifiers: Principles and Applications," Wiley-Interscience.
  11. OIF Implementation Agreement — Coherent Driver Modulator and Integrated Coherent Transmit-Receive Optical Sub-Assembly specifications, Optical Internetworking Forum.
  12. Sanjay Yadav, "Optical Network Communications: An Engineer's Perspective" — Bridge the Gap Between Theory and Practice in Optical Networking.