Substrate

The Mid-Atlantic Network Facility for Research, Experimentation, and Development (MANFRED) has leveraged the DRAGON network infrastructure by adding server virtualization capabilities and programmable network hardware.

Components

In general, our substrate consists of the following components:

  • Layer 1 optical transport — Dense Wavelength Division Multiplexing (DWDM) wavelength-selectable switches
    • all-optical switching capabilities in the core
    • GMPLS-enabled control plane for dynamic reconfiguration at the wavelength level

  • Layer 2 Ethernet switches — Statistical Multiplexing (802.3 and 802.1Q)
    • dedicated end-to-end Gigabit and 10 Gigabit Ethernet circuits (and also subrate circuits) for bulk file transfer and real-time applications
    • inter-domain connections to private nationwide layer-2 backbones (such as ProtoGENI and Internet2 DCN)
    • GMPLS-enabled control plane for dynamic reconfiguration at the Ethernet VLAN level
    • standardized Web Services (WS) interface for circuit provisioning (a provisioning sketch follows this list)

  • Layer 3 IP routers — best-effort routed IP service
    • general Internet/R&E network connectivity for management and control
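
To make the WS provisioning interface concrete, the sketch below shows how a client might request a dynamic VLAN circuit between two Ethernet switch ports. The endpoint URL, request fields, and response handling are illustrative assumptions, not the documented DRAGON/DCN interface, which defines its own schema.

    # Hypothetical sketch of a Layer 2 circuit provisioning request.
    # The endpoint, request fields, and response format are assumptions
    # for illustration; the real WS interface defines its own schema.
    import json
    import urllib.request

    PROVISIONING_URL = "https://dragon.example.net/ws/circuits"  # placeholder endpoint

    request_body = {
        "source":         "umd-ethernet-switch:port10",      # hypothetical endpoint IDs
        "destination":    "isi-east-ethernet-switch:port3",
        "vlan":           3001,          # VLAN tag for the end-to-end circuit
        "bandwidth_mbps": 1000,          # dedicated GigE (subrate circuits also possible)
        "start":          "2010-06-01T12:00:00Z",
        "end":            "2010-06-01T16:00:00Z",
    }

    req = urllib.request.Request(
        PROVISIONING_URL,
        data=json.dumps(request_body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))           # e.g. a circuit/reservation identifier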

The substrate does not contain any fiber (white light) switches, such as GlimmerGlass or Calient. We often refer to wavelength-selectable switches (WSS) as reconfigurable optical add/drop multiplexers (ROADMs) and use these terms interchangeably.

Block Diagram of Typical Substrate Node

A typical substrate node contains either a three- or four-degree wavelength-selectable switch (ROADM), which terminates the dark fiber. These optical switches perform switching at the individual-wavelength level and support up to 40 wavelengths per fiber. Most core nodes include an optical add/drop shelf, which contains transponder modules for converting client-side signals (e.g. 1000Base-LX, 10GBase-LR) to DWDM ITU grid C-band wavelengths in the 1520-1570 nm range.
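
As a rough illustration of how 40 wavelengths fit in that window, the short script below lays out 40 channels on the standard ITU-T G.694.1 100 GHz C-band grid and converts each to its wavelength. The starting frequency of 192.0 THz is an assumption for illustration, not the documented DRAGON channel plan.

    # Sketch: 40 channels on the ITU-T G.694.1 100 GHz grid (C band).
    # The 192.0 THz starting frequency is an illustrative assumption,
    # not the documented DRAGON channel plan.
    C_NM_THZ = 299_792.458                # speed of light in nm*THz (= km/s)

    channels_thz = [192.0 + 0.1 * n for n in range(40)]     # 100 GHz spacing
    for f in channels_thz:
        print(f"{f:6.1f} THz  ->  {C_NM_THZ / f:8.2f} nm")  # lambda = c / f

    # First and last channels: 192.0 THz -> 1561.42 nm, 195.9 THz -> 1530.33 nm,
    # so all 40 wavelengths land inside the quoted 1520-1570 nm range.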

A 10Gbps-capable Layer-2 Ethernet switch connects to transponders on the optical add/drop shelf, providing multiple GigE and 10GigE circuits to other nodes around the network. Virtualization servers, GMPLS control PCs, and network performance verification PCs connect to this Ethernet switch; these hosts terminate dynamic, end-to-end Layer-2 circuits provisioned across the network.
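
When one of these circuits is delivered to a host as a tagged VLAN, terminating it typically amounts to configuring a VLAN sub-interface on the host's port facing the Ethernet switch. The sketch below drives the standard Linux iproute2 tools from Python; the interface name, VLAN ID, and addressing are illustrative assumptions chosen by the experimenter.

    # Sketch: terminate a VLAN-tagged Layer-2 circuit on a Linux host.
    # Interface name, VLAN ID, and addressing are illustrative assumptions.
    import subprocess

    PARENT_IFACE = "eth1"        # NIC facing the DRAGON Ethernet switch (assumed)
    VLAN_ID = 3001               # tag assigned when the circuit was provisioned
    ADDRESS = "10.10.30.1/30"    # point-to-point addressing for the circuit
    SUB_IFACE = f"{PARENT_IFACE}.{VLAN_ID}"

    def run(cmd):
        """Run an iproute2 command, raising if it fails."""
        subprocess.run(cmd, check=True)

    # Create the VLAN sub-interface, bring it up, and assign an address.
    run(["ip", "link", "add", "link", PARENT_IFACE, "name", SUB_IFACE,
         "type", "vlan", "id", str(VLAN_ID)])
    run(["ip", "link", "set", SUB_IFACE, "up"])
    run(["ip", "addr", "add", ADDRESS, "dev", SUB_IFACE])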

Finally, a control switch provides management access and general Internet connectivity to the node.

DRAGON typical core node block diagram

Physical Implementation and Backbone Connections

The network contains five core optical switching nodes connected by over 100 miles of dark fiber provided by Qwest, Level3 and FiberGate:

  • University of Maryland — College Park, Maryland
  • University of Southern California Information Sciences Institute (ISI) East — Arlington, Virginia
  • Level3 — McLean, Virginia
  • George Washington University — Washington, DC
  • Qwest — Washington, DC

The diagram below depicts the overall physical topology of the network substrate and connections to nationwide backbones, as currently deployed in the Washington DC metro region:

DRAGON Washington DC metro area footprint

Each of these core switching nodes contains a rack of equipment which resembles the picture below:

DRAGON typical core node physical implementation (photo of UMCP node)

Please note that not every node contains all of these resources. For example, the George Washington University (GWU) node contains only a wavelength-selectable switch; it has no add/drop shelf, Ethernet switch, or PCs, and provides only optical switching and amplification. Programmable network hardware is available only at select nodes, as discussed in the next section.

Virtualization and Programmable Network Resources

As part of GENI, the network substrate described above has been augmented with virtualization nodes and programmable network hardware.

At this time, five PlanetLab nodes offer virtual machine resources, providing each slice a share of the CPU, memory, and disk of the underlying system. Two NetFPGA cards are installed in rackmount PCs and connected to GigE ports on the Ethernet switch at their respective nodes. Please note that the NetFPGA cards cannot yet be shared by multiple slices simultaneously; each card is available to only one slice at a time.
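
For experimenters, access to these PlanetLab nodes follows the usual slice model. The sketch below binds an existing slice to MANFRED nodes through the PlanetLab Central XML-RPC API (PLCAPI); the slice name, account, and node hostnames are illustrative assumptions, and GENI-specific tooling may wrap or replace this step.

    # Sketch: add an existing slice to MANFRED PlanetLab nodes via PLCAPI.
    # Slice name, credentials, and hostnames are illustrative assumptions.
    import xmlrpc.client

    plc = xmlrpc.client.ServerProxy("https://www.planet-lab.org/PLCAPI/",
                                    allow_none=True)

    auth = {
        "AuthMethod": "password",
        "Username":   "researcher@example.edu",   # placeholder account
        "AuthString": "not-a-real-password",
    }

    # Hypothetical hostnames for two of the MANFRED PlanetLab nodes.
    manfred_nodes = [
        "planetlab1.dragon.example.net",
        "planetlab2.dragon.example.net",
    ]

    # Bind the slice to those nodes; PlanetLab then instantiates a VM
    # (a share of CPU/memory/disk) for the slice on each host.
    plc.AddSliceToNodes(auth, "example_myslice", manfred_nodes)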

The diagram below depicts the locations of the five PlanetLab nodes and two NetFPGA cards:
