StarLight Supported International/National/Regional/Local Experimental Research Testbed Networks
  • 400/800 Gbps/1.6 Tbps WAN Services Testbeds
    Data production among science research collaborations continues to accelerate, a long-term trend propelled in part by large-scale science instrumentation, including high-luminosity research instruments. Consequently, the networking community is preparing for service paths beyond 100 Gbps, including 400 Gbps, 800 Gbps, and 1.6 Tbps WAN and LAN services. In this progression, 400 Gbps E2E WAN services are a key building block. The requirements and implications of 400 Gbps WAN services are therefore being explored at scale by StarLight and its research partners, including 400/800 Gbps and 1.6 Tbps E2E services on customized testbeds spanning tens of thousands of miles.
  • AIDTN: Towards a Real-Time AI Optimized DTN System With NVMeoF Testbed
    This testbed was designed to develop AI techniques for optimizing large-scale, long-distance, high-performance data flows using Data Transfer Nodes (DTNs) for data-intensive science.
  • AutoGOLE/NSI/MEICAN: Dynamic Global L2 Provisioning
    StarLight is a founding member of the AutoGOLE worldwide collaboration of Open eXchange Points and research and education networks developing automated end-to-end network services, e.g., supporting connection requests through the Network Service Interface Connection Service (NSI-CS), including dynamic multi-domain L2 provisioning (a sketch of such a request appears below). StarLight also participates in a related initiative, the Software-Defined Network for End-to-end Networked Science at Exascale (SENSE) system, which provides the mechanisms to integrate resources beyond the network, such as compute, storage, and Data Transfer Nodes (DTNs), into this automated provisioning environment. NSI has been augmented by the MEICAN provisioning tools created by RNP, the Brazilian national R&E network. These initiatives use the global AutoGOLE experimental testbed, which has sites in North and South America, the Asia Pacific, and Europe.
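    The following minimal sketch illustrates the general shape of an NSI-CS style point-to-point L2 reservation request (source and destination Service Termination Points, capacity, and a schedule). The STP URNs, VLAN, and the nsi_client wrapper are hypothetical placeholders for illustration, not part of the AutoGOLE or MEICAN software.

# Illustrative sketch only: the request fields mirror NSI-CS reservation
# concepts (source/destination STPs, capacity, schedule), but the
# "nsi_client" wrapper and the STP/VLAN values are hypothetical.
from datetime import datetime, timedelta, timezone

reserve_request = {
    "description": "Dynamic multi-domain L2 circuit via AutoGOLE",
    "source_stp": "urn:ogf:network:example.net:2013:topology:port-a?vlan=3301",  # hypothetical STP
    "dest_stp": "urn:ogf:network:example.org:2013:topology:port-b?vlan=3301",    # hypothetical STP
    "capacity_mbps": 10000,                                # requested bandwidth
    "start_time": datetime.now(timezone.utc),              # begin immediately
    "end_time": datetime.now(timezone.utc) + timedelta(hours=2),
}

# A client wrapper (hypothetical) would submit the reserve, then commit and
# provision the connection once all domains along the path have confirmed:
# connection_id = nsi_client.reserve(reserve_request)
# nsi_client.reserve_commit(connection_id)
# nsi_client.provision(connection_id)
print(reserve_request)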
  • Chameleon Cloud Testbed
    StarLight is a founding member of the NSF Chameleon Cloud Testbed, a large-scale, deeply reconfigurable experimental platform built to support computer science systems research. Community projects range from systems research on new operating systems, virtualization methods, performance variability, and power management to projects in software-defined networking, artificial intelligence, and resource management. To support experiments of this type, Chameleon provides a bare metal reconfiguration system, giving users complete control of the software stack, including root privileges, kernel customization, and console access. While most testbed resources are configured this way, a small portion is configured as a virtualized KVM cloud, balancing the finer-grained resource sharing sufficient for some projects against the coarser-grained but stronger isolation of bare metal. StarLight is collaborating with the Chameleon and FABRIC communities to integrate the capabilities of both testbeds.
  • Ciena OPn Research On-Demand Network Testbed
    This testbed comprises north-south and east-west paths: 400 Gbps between StarWave and the Ciena Research Lab in Ottawa via CANARIE, and 1.2 Tbps between StarLight and the Joint Big Data Testbed in McLean, Virginia. It is being used to develop prototype 1.2 Tbps WAN services for data-intensive sciences.
  • Cisco ICN Testbed (Content-Centric Networking)
    This testbed is investigating techniques for content-centric networking.
  • Cisco Coherent Optic WAN Testbed
    This testbed is investigating coherent optics capabilities for supporting high-capacity WAN services for data-intensive sciences.
  • DTN-as-a-Service Testbed
    For several years, the StarLight consortium has been developing high-performance DTN-as-a-Service capabilities to prototype network analytic services for up to 1.2 Tbps end-to-end WAN infrastructure. DTN-as-a-Service focuses on transporting large data sets across WANs and LANs within cloud environments, including by using orchestrators such as Kubernetes, to improve the performance of data transport over high-performance networks (a sketch of launching a DTN as a Kubernetes workload appears below). These experiments are being conducted on 400 Gbps, 800 Gbps, and Tbps testbeds. They demonstrate the implementation of cloud-native services for data transport within and among Kubernetes clouds through the DTN-as-a-Service framework, which sets up, optimizes, and monitors the underlying system and network. DTN-as-a-Service streamlines big data movement workflows by providing a Jupyter controller, a popular data science tool, to identify, examine, and tune the underlying DTNs for high-performance data movement in Kubernetes, and by enabling data transport over long-distance WANs using different networking fabrics. For several years, DTNaaS has been implemented as an XNET resource for the annual IEEE/ACM International Conference on High Performance Computing, Networking, Storage, and Analysis.
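    As a rough illustration of the cloud-native pattern described above, the sketch below uses the official Kubernetes Python client to launch a DTN pod. The container image, namespace, and resource figures are assumptions for illustration; the actual DTN-as-a-Service framework layers tuning, monitoring, and the Jupyter controller on top of steps like this.

# Minimal sketch: launching a DTN pod with the Kubernetes Python client.
# The image name, namespace, and resource requests are hypothetical.
from kubernetes import client, config

config.load_kube_config()                      # use the local kubeconfig
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="dtn-worker", labels={"app": "dtnaas"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="dtn",
                image="example.org/dtnaas/dtn:latest",          # hypothetical image
                ports=[client.V1ContainerPort(container_port=5201)],
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "8", "memory": "32Gi"},    # assumed sizing
                ),
            )
        ],
        host_network=True,   # DTNs commonly need direct access to host NICs
    ),
)

v1.create_namespaced_pod(namespace="dtnaas", body=pod)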
  • Elastic Data Transfer Enabled by Programmable Data Planes Testbed (SciStream)
    The StarLight Facility is supporting Argonne National Laboratory in research projects investigating new techniques for an elastic data transfer infrastructure, using an architecture that expands and shrinks data transfer node resources based on demand. Data plane programmability has emerged as a response to the lack of flexibility in networking ASICs and the long product cycles required to introduce new protocols on networking equipment. This approach bridges the gap between the potential of the SDN model and actual OpenFlow implementations. Following the ASIC tradition, OpenFlow implementations have focused on matching protocol header fields in forwarding tables, which cannot be modified once the switch is manufactured. In contrast, programmable data planes allow network programmers to define precisely how packets are processed in a reconfigurable switch chip (or in a virtual software switch). Such levels of programmability provide opportunities for offloading specific processing of the data to the network and obtaining a more accurate view of network state. One key element of the elastic DTI architecture is the statistics collector that feeds usage and performance information to a decision engine. Another is the load balancer that distributes incoming transfers among existing virtualized resources. State-of-the-art solutions rely on traditional network monitoring systems such as SNMP and sFlow to collect network state information. However, traditional network monitoring methods either poll network devices or use sampling when devices are allowed to push data, to lower communication overhead and save database storage space. In-band Network Telemetry (INT) is a framework that allows the data plane to add telemetry metadata to each packet of a flow; the metadata is then removed and sent to a collector/analyzer before the packet is forwarded to its final destination (an illustrative parser for such per-hop metadata appears below). This initiative undertook experiments to show the impact of advanced network telemetry using programmable switches and the P4 (Programming Protocol-independent Packet Processors) language on the granularity of network monitoring measurements, comparing the detection gap between a programmable data plane approach and traditional methods such as sFlow.
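    To make the monitoring-granularity point concrete, the sketch below parses simplified per-hop INT records (switch ID, ingress/egress timestamps, queue occupancy) as a telemetry collector might. The 16-byte field layout is an illustrative assumption, not the exact INT format used in the testbed's P4 programs.

# Illustrative INT-style metadata parser. The 16-byte per-hop layout used
# here (switch id, ingress/egress timestamps in ns, queue occupancy) is an
# assumed example, not the telemetry format of any particular P4 program.
import struct

HOP_FORMAT = "!IIII"                 # 4 unsigned 32-bit fields, network byte order
HOP_SIZE = struct.calcsize(HOP_FORMAT)

def parse_int_metadata(payload: bytes):
    """Yield one dict per hop record embedded in a telemetry payload."""
    for offset in range(0, len(payload) - HOP_SIZE + 1, HOP_SIZE):
        switch_id, ingress_ts, egress_ts, queue_occupancy = struct.unpack_from(
            HOP_FORMAT, payload, offset
        )
        yield {
            "switch_id": switch_id,
            "hop_latency_ns": egress_ts - ingress_ts,
            "queue_occupancy": queue_occupancy,
        }

# Example: two fabricated hop records, as a collector might receive them.
sample = struct.pack("!IIII", 1, 1_000, 1_350, 42) + struct.pack("!IIII", 2, 2_000, 2_900, 7)
for hop in parse_int_metadata(sample):
    print(hop)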
  • ESnet 400 Gbps National Testbed
    The Energy Sciences Network (ESnet) provides services for major large-scale science research programs, lab facilities, and supercomputing centers. To provide an experimental research testbed for science networking innovations, ESnet has designed and implemented a 400 Gbps testbed that connects its lab in Berkeley, California, to the StarLight facility.
  • FABRIC: National/International U.S. Computer Science Testbed
    StarLight and the Metropolitan Research and Education Network (MREN) provide support for the NSF-funded FABRIC testbed (FABRIC is Adaptive ProgrammaBle Research Infrastructure for Computer Science and Science Applications), an international infrastructure that enables cutting-edge experimentation and research at scale in the areas of networking, cybersecurity, distributed computing, storage, virtual reality, 5G, IoT, machine learning, and science applications.
    The FABRIC testbed supports experimentation on new Internet architectures, protocols, and distributed applications using a blend of resources from FABRIC, its facility partners, and their connected campuses and opt-in sites. FABRIC is an everywhere-programmable network combining core and edge components and interconnecting to many external facilities. FABRIC is a multi-user facility supporting concurrent experiments of differing scales facilitated through federated authentication/authorization systems with allocation controls. The FABRIC infrastructure is a distributed set of equipment at commercial collocation spaces, national labs, and campuses.
    Each of the 29 FABRIC sites has large amounts of compute and storage, interconnected by high-speed, dedicated optical links. The testbed also connects to specialized testbeds (e.g., PAWR, NSF Clouds), the Internet, and high-performance computing facilities to create a rich environment for a wide variety of experimental activities. At StarLight, FABRIC supports 1.2 Tbps of capacity from the east coast and 1.2 Tbps of capacity to the west coast. One project StarLight and its research partners are addressing is integrating FABRIC and Chameleon. FABRIC has been designed to be extensible, continually connecting to new facilities, including clouds, networks, other testbeds, computing facilities, and scientific instruments. Experimenters typically request FABRIC resources programmatically, as sketched below.
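    The sketch below shows the general pattern of requesting FABRIC resources through the FABlib Python API: build a slice, add a node at a site, and submit. The site name ("STAR" for StarLight), node sizing, and image name are assumptions, and exact call names can vary across FABlib releases.

# Sketch of requesting a FABRIC slice via the FABlib API (pattern only;
# the site name, node sizing, and image below are assumptions).
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()                      # reads the user's FABRIC configuration

my_slice = fablib.new_slice(name="starlight-demo")
node = my_slice.add_node(
    name="dtn1",
    site="STAR",                              # assumed: StarLight FABRIC site
    cores=8,
    ram=32,
    disk=100,
    image="default_ubuntu_22",                # assumed image name
)
my_slice.submit()                             # instantiate the slice

print(node.get_management_ip())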
  • FAB (FABRIC Across Borders)
    The NSF FABRIC Across Borders (FAB) companion initiative is an extension of the FABRIC testbed connecting the core North American infrastructure to four nodes in Asia, Europe, and South America. By creating the networks needed to move vast amounts of data across oceans and time zones seamlessly and securely, the project enables international collaboration to accelerate scientific discovery.
  • iCAIR International P4 Testbed
  • GEANT Global P4 Testbed
    StarLight supports multiple research projects using the P4 network programming language (“Protocol Independent, Target Independent, Field Reconfigurable”), enabling many new capabilities for programmable networks, including capabilities supporting data-intensive science services.
    A particularly important P4 capability is In-band Network Telemetry (INT), which enables high-fidelity network flow visibility. To develop the capabilities of P4, an international consortium of network research institutions collaborated in designing, implementing, and operating two international P4 testbeds. One is an international P4 testbed (Global P4Lab) designed and implemented by a global consortium led by GEANT, the network interconnecting the European national R&E networks.
    Another global P4 testbed was designed by the International Center for Advanced Internet Research (iCAIR) at Northwestern University. This testbed provides a highly distributed network research and development environment to support advanced empirical experiments at a global scale, including on 400 Gbps paths. The testbed also provides access to a P4Runtime implementation.
  • Illinois Express Quantum Communications and Networking Testbed (IEQNET)
    The Illinois Express Quantum Network (IEQNET) Testbed was designed, and supporting programs were implemented, to realize metropolitan-scale quantum networking over deployed optical fiber using currently available technology. IEQNET consists of multiple sites that are geographically dispersed in the Chicago metropolitan area, including Northwestern's campus in Evanston, the StarLight Communications Exchange Facility on Northwestern's Chicago campus, a carrier exchange in central Chicago, Argonne National Laboratory, and Fermi National Accelerator Laboratory. Each site has quantum nodes (Q-Nodes) that generate or measure quantum signals, primarily entangled photons.
    Measurement results are communicated using standard classical signals, communication channels, and conventional networking techniques such as Software-defined networking (SDN). The entangled photons in IEQNET nodes are generated at multiple wavelengths and are selectively distributed using transparent optical switches. At the OFC conference in March 2023, the Illinois Express Quantum Network consortium, led by NuCrypt and Northwestern’s Center for Photonic Computing and Communications, demonstrated the distribution and measurement of quantum entangled signals over fiber with co-propagating classical data. Distributed measurements were collected and controlled from a single location using an embedded optical data link. An optical switch was programmed to send different quantum entangled wavelengths to spatially separated points. Demonstrating coordinated control of quantum photonic instruments at multiple sites highlighted the capability for robust operation of commercially available quantum optical equipment over fiber-optic infrastructure.
  • International Global Environment for Network Innovations (iGENI) SDN/OpenFlow and Multi-Services Exchange SDX Testbed
    The International Global Environment for Network Innovations (iGENI) SDN/OpenFlow and Multi-Services Exchange SDX Testbed is being used to explore new techniques for large-scale, worldwide programmable networking.
  • International ExaTrans Testbed: Exascale Science 400 Gbps Testbed
    With its research partners, the StarLight Consortium established the International ExaTrans/NVMe-over-Fabrics-as-a-Microservice testbed as a platform for creating services based on an integrated SDN/SDX/DTN design using 400 Gbps DTNs for WANs, including transoceanic WANs, to provide high-performance transport services for exascale science, controlled using SDN techniques. These SDN-enabled DTN services are being designed specifically to support large-scale, high-capacity, high-performance, reliable, high-quality, sustained individual data streams for science research, including over thousands of miles on multi-domain networks. One key service supports high-performance transfers of extremely large files (e.g., exabytes) and extremely large collections of small files (e.g., many millions of files). The integration of these services with DTN-based services using SDN has been designed to ensure E2E high performance for those streams and to support highly reliable services for long-duration data flows. Resolving this issue requires addressing and optimizing multiple components of an E2E path: processing pipelines, high-performance protocols, kernel tuning, OS bypass, path architecture, buffers, memory used for transport, capabilities for slicing resources across the exchange to segment different science communities while using a common infrastructure, and many other individual components. As part of this initiative, StarLight established the ExaTrans with NVMe-over-Fabrics as a Microservice Testbed to support research projects directed at improving large-scale WAN microservices for streaming and transferring large data among high-performance Data Transfer Nodes (DTNs). Building on earlier initiatives, this initiative is designing, implementing, and experimenting with NVMe-over-Fabrics on 400/800 Gbps DTNs over large-scale, long-distance networks, with direct NVMe-to-NVMe service over RoCE and TCP fabrics using SmartNICs (a sketch of the basic NVMe/TCP attach step appears below). The NVMe-over-Fabrics microservice connects remote NVMe devices without userspace applications, thereby reducing overhead in high-performance transfers, and offloads the NVMe-over-Fabrics initiator software stack onto SmartNICs. A primary advantage of the NVMe-over-Fabrics microservice is that it can be deployed in multiple DTNs as a container with low overhead.
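    As a minimal sketch of the basic attach step underlying this work, the snippet below drives the standard nvme-cli tool from Python to connect a host to a remote NVMe/TCP subsystem. The target address, service ID, and NQN are placeholders; the ExaTrans/PetaTrans microservice packages equivalent initiator logic in containers and offloads it to SmartNICs rather than shelling out on a host.

# Sketch: attaching a remote NVMe-over-Fabrics target over TCP using the
# standard nvme-cli tool. The target address, service id, and NQN are
# placeholders for illustration.
import subprocess

TARGET_ADDR = "192.0.2.10"                          # placeholder DTN address
TARGET_NQN = "nqn.2024-01.org.example:nvme-target"  # placeholder subsystem NQN

def nvme_connect_tcp(addr: str, nqn: str, svcid: str = "4420") -> None:
    """Connect this host to a remote NVMe/TCP subsystem (requires root)."""
    subprocess.run(
        ["nvme", "connect", "-t", "tcp", "-a", addr, "-s", svcid, "-n", nqn],
        check=True,
    )

if __name__ == "__main__":
    nvme_connect_tcp(TARGET_ADDR, TARGET_NQN)
    # The remote namespace now appears as a local block device (e.g., /dev/nvme1n1)
    # and can be used for direct NVMe-to-NVMe transfers without a userspace copy path.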
  • Joint Big Data Testbed StarLight ←→ McLean Virginia
    The Joint Big Data Testbed (JBDT) was designed and implemented as a collaboration of the NASA Goddard Space Flight Center and the Naval Research Lab in Washington, DC, to explore extremely large data transfers over thousands of miles of WANs. The core of the testbed is based on 1.2 Tbps paths among core nodes in McLean, Virginia, and the StarLight facility in Chicago, and on 400 Gbps paths on the ESnet testbed between StarLight and Berkeley, California. Each year, the JBDT extends its paths and capabilities with support from SCinet to showcase experiments and demonstrations at the IEEE/ACM International Conference on High Performance Computing, Networking, Storage, and Analysis.
  • KREONET SD-WAN Testbed (KREONET-S)
    This KREONET testbed is developing techniques for using Software-Defined Networking (SDN) to provide global WAN services for data-intensive science. The KREONET SD-WAN testbed is an open source (vendor neutral), highly scalable, centrally controlled, virtualized wide area network (WAN) testbed. It has been implemented from South Korea to StarLight over the KREONET network.
  • LHCONE P2P Prototype Service International Dynamic Multi-Domain L2 Service for HEP
    The LHCONE (LHC Open Network Environment) P2P prototype service is developing techniques for large-scale, high-performance data transfer in preparation for the High Luminosity LHC.
  • Large Synoptic Survey Telescope (LSST) Prototype Service Testbed
    StarLight has supported testbeds preparing for data flows (a Data Transfer Node based service) from the Large Synoptic Survey Telescope (LSST) in Chile to the StarLight Facility in Chicago.
  • NASA Goddard Space Flight Center High End Computing Network (HECN) Testbed
    StarLight supports the NASA Goddard Space Flight Center High End Computing Network (HECN) Testbed, which is developing new 400 Gbps/800 Gbps/Tbps WAN capabilities for a wide variety of sciences, including atmospheric sciences, space exploration, and geosciences. StarLight established this partnership with NASA, which must process and exchange increasingly vast amounts of scientific data, to address the issues of transporting large amounts of data over WANs. NASA networks must scale to ever-higher performance, with 400/800 Gigabit per second (Gbps) WAN/LAN networks the current challenge being addressed and Tbps planned for the future. The NASA Goddard High End Computer Networking (HECN) team is developing systems and techniques to achieve near 400G line-rate disk-to-disk data transfers between a pair of high-performance NVMe servers across national WAN paths, utilizing NVMe-oF/TCP technologies to transfer the data between the servers' PCIe Gen4 NVMe drives (a rough sizing illustration appears below). These techniques are being explored and tested on a national-scale testbed, including the WAN testbeds created by SCinet for the IEEE/ACM International Conference on High Performance Computing, Networking, Storage, and Analysis.
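    As a back-of-the-envelope illustration of what near 400G disk-to-disk implies for the storage subsystem, the arithmetic below estimates how many NVMe drives must be striped together, assuming a per-drive sequential rate. The figures are illustrative assumptions, not measurements from the HECN testbed.

# Back-of-the-envelope sizing for a 400 Gbps disk-to-disk transfer.
# The per-drive rate is an assumed figure for a PCIe Gen4 NVMe drive,
# not a measurement from the HECN testbed.
import math

target_gbps = 400                      # desired WAN line rate
per_drive_gbs = 5.0                    # assumed sustained GB/s per NVMe drive
per_drive_gbps = per_drive_gbs * 8     # convert GB/s to Gbps -> 40 Gbps

drives_needed = math.ceil(target_gbps / per_drive_gbps)
print(f"{drives_needed} drives at {per_drive_gbps:.0f} Gbps each "
      f"to sustain {target_gbps} Gbps")   # -> 10 drives, before protocol overhead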
  • Naval Research Lab Resilient Distributed Processing and Rapid Data Transport Testbed
    The NRL Resilient, Distributed Processing And Rapid Data Transport (RDPRD) Testbed was designed and implemented to investigate large-scale interconnected and interlocking problems that demand a high-performance, dynamic, distributed, data-centric infrastructure, including a close integration of high-performance WAN transport, HPC compute facilities, high-performance storage, and sophisticated data management.
  • PetaTrans NVMe-over-Fabrics As a Microservice Testbed
  • Network Optimized for the Transport of Experimental Data (NOTED) Testbed
    The StarLight Consortium has been participating in an international initiative led by the CERN LHC networking group to develop a capability entitled Network Optimized for Transfer of Experimental Data (NOTED) for potential use by the Large Hadron Collider (LHC) networking community. The goal of the NOTED project is to optimize transfers of LHC data among sites (including by using AI/ML/DP techniques) by addressing problems such as saturation, contention, congestion, and other impairments. To support the research components of this initiative, iCAIR has collaborated with multiple other organizations to design and implement an international NOTED testbed.
  • OMNInet Optical Metro Network Initiative Testbed (Metro Optical-Fiber Fabric and Co-Lo Facilities for L0/L1/L2 Experimentation)
    OMNInet is a large-scale collaborative experimental network established to evolve metro digital communication services to ensure they are high-capacity, high-performance, highly scalable, reliable, and manageable at all levels. OMNInet was designed to assess and validate next-generation optical technologies, architecture, and applications in metropolitan networks. Communications architecture based on complex core facilities is optimal when 80% of information flows are local; however, much Internet traffic consists of remote access, and these patterns require new types of architecture and engineering. On this optical metro testbed, a research partnership has conducted trials of photonic-based GE services built on innovative optical transport switching incorporating photonic-based components, architecture, and techniques supporting multiple interconnected lightwave (lambda) paths within fiber strands. OMNInet is based on Dense Wavelength Division Multiplexing (DWDM), which allows multiple data streams to travel over the same fiber pair by using different colors of light (light frequencies). Each frequency communicates data simultaneously, substantially increasing fiber capacity.
    OMNInet draws on a wide range of architecture emerging from multiple standards organizations, including the ITU, IETF, and IEEE. New techniques for traffic engineering are being explored on this testbed, especially those that can take advantage of architectural models that are more distributed than hierarchical. OMNInet allows research on core optical components (e.g., multi-protocol, integrated DWDM), experiments with new technologies and techniques (including IP control planes using GMPLS, which employs a signaling overlay architecture), testing and analysis, and new protocols. OMNInet employs Internet protocols and mesh architectures to provide reliability through redundancy, automatic restoration, optimization through traffic management, pre-fault diagnostics for trouble avoidance, granular service definition, and related capabilities. Key components are adjustable lasers and minute mirrors that control light wavelengths to route traffic. The original optical switches were not commercial products but unique designs based on MEMS-supported O-O-O devices.
    OMNInet research projects have included:
    A) Trials of highly reliable, scalable 400/800 GE in metropolitan and wide area networks. Ethernet is the global standard for local area networks (LANs) that connect today's computing devices; DWDM supports performance many times faster than current standards and can extend the network throughout metropolitan areas (MANs) and between cities (WANs).
    B) Trials of new technologies to support applications that require extremely high levels of bandwidth.
    C) Development and trials of optical switching, ensuring maximized capabilities in the wide-scale deployment of all-photonic networks.
    D) Trials of new techniques that allow application signaling to optical network resources.
    E) Experiments with new types of advanced networking middleware that make networks more intelligent.
    F) Trials of Multi-Service Optical Networking (MON) as a dedicated point-to-point network service enabling interconnections among sites as well as mirroring data and transmitting large quantities of information at high performance.
OMNInet has implemented core optical switches at StarLight, interconnected by dedicated optical fiber with multiple other devices to allow flexible L1 and L2 interconnectivity. Core nodes are connected to computational clusters at various sites and other testbeds. OMNInet uses StarLight resources to extend experiments nationally and internationally. OMNInet is being used to support quantum networking research in partnership with Argonne National Laboratory and Fermi National Accelerator Laboratory.
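    To illustrate the capacity scaling that DWDM provides, the arithmetic below multiplies an assumed channel count by an assumed per-wavelength rate to obtain an aggregate per-fiber-pair figure; the numbers are illustrative, not a description of the OMNInet plant.

# Illustrative DWDM capacity arithmetic (assumed figures, not OMNInet specifications):
channels_per_fiber_pair = 80        # assumed DWDM channel count
gbps_per_channel = 400              # assumed per-wavelength rate

aggregate_tbps = channels_per_fiber_pair * gbps_per_channel / 1000
print(f"{aggregate_tbps:.0f} Tbps per fiber pair")   # -> 32 Tbps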
  • Open Science Data Cloud Testbed (OCT)
    The Open Science Data Cloud Testbed (OCT), operated by the Open Cloud Consortium (OCC), is a national-scale 100 Gbps testbed for data-intensive science cloud computing, addressing large data streams, unlike other cloud architectures that are oriented toward millions of small data streams.
  • Prototype 1.6 Tbps WAN Service Testbed
    StarLight supports prototype 1.6 Tbps WAN services based on two 800 Gbps long-distance optical channels.
  • SCItags International Science Packet/Flow Marking Testbed
    With multiple research partners led by CERN, StarLight is participating in a research project exploring techniques and technologies for managing large-scale scientific workflows over networks.
    One technique is the use of scientific network tags (scitags), an initiative promoting the identification of science domains and their high-level activities at the network level. This task is becoming increasingly complex, especially as multiple science projects share the same foundation resources simultaneously yet are governed by multiple divergent variables: requirements, constraints, configurations, technologies, etc. A key method for addressing this issue is employing techniques that provide high-fidelity visibility into exactly how science flows utilize network resources end-to-end (an illustrative flow-label marking sketch appears below). To develop these services, architectures, techniques, and technologies, StarLight and its research partners (including NA REX) have designed and implemented an international Scitags Packet Marking Testbed operating at 1.2 Tbps between the University of Victoria in Canada and the StarLight Facility, using three 400 Gbps NA REX channels.
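    The sketch below illustrates the general packet-marking idea: a science-domain (experiment) identifier and an activity identifier are packed into the 20-bit IPv6 flow label carried by every packet of a flow. The 10/10 bit split and the identifier values are assumptions for illustration; the scitags specification and registry define the authoritative encoding.

# Illustrative scitags-style marking: pack an experiment id and an activity
# id into the 20-bit IPv6 flow label. The 10/10 bit split and the id values
# are assumptions; the scitags specification defines the real field layout.
import socket

def make_flow_label(experiment_id: int, activity_id: int) -> int:
    """Pack two 10-bit identifiers into a 20-bit flow label value."""
    assert 0 <= experiment_id < 1024 and 0 <= activity_id < 1024
    return (experiment_id << 10) | activity_id

label = make_flow_label(experiment_id=42, activity_id=7)

# On Linux, a per-socket flow label can be requested via the flow-label manager
# socket options; shown here only as the place where the label would apply.
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
print(f"flow label 0x{label:05x} would mark packets of this flow")
sock.close()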
  • SDX Interoperability Prototype Service Testbed
    StarLight is collaborating with other R&E open exchanges on developing WAN interoperability services for data-intensive sciences.
  • Intelligent Network Services for Science Workflows (SENSE) Testbed
    With multiple international partners, StarLight is participating in the development of the Intelligent Network Services for Science Workflows (SENSE) Testbed, as well as in research experiments being conducted on that testbed. SENSE is a multi-resource, multi-domain orchestration system that provides an integrated set of network and end-system services. (SENSE is closely integrated with AutoGOLE.) Key research topics include technologies, methods, and a system of dynamic Layer 2 and Layer 3 network services to meet the challenges and address the requirements of the largest data-intensive science programs and workflows. SENSE services are designed to support multiple-petabyte transactions across the globe, with real-time monitoring and troubleshooting, using a persistent testbed spanning the US, Europe, Asia Pacific, and Latin American regions. A particularly important area of investigation is the potential for integration of SENSE with the Rucio/File Transfer Service (FTS)/XRootD data management and movement system, the key infrastructure used by LHC experiments and more than 30 other programs in the Open Science Grid. Recent features include the ability for science workflows to define priority levels for data movement operations through a Data Movement Manager (DMM) that translates Rucio-generated priorities into SENSE requests and provisioning operations.
  • SEAIP DTN-as-a-Service Testbed
    As part of a collaboration with SEAIP (the Southeast Asia International Joint-Research and Training Program), StarLight supports the design and implementation of an international Southeast Asia DTN testbed directed at creating a Data Mover Service supported by multiple sites in countries throughout Southeast Asia. Another goal is to participate in the Supercomputing Asia Conference Data Mover Challenge.
  • VTS Testbed: Isolated Overlay Topologies
    StarLight provides support for the University of Houston's Virtual Transfer Services (VTS) Testbed, an SDN offering on the NSF's Global Environment for Network Innovations (GENI) that provides a VTS Aggregate Manager for GENI. VTS enables experimenters to create isolated overlay topologies based on programmable datapath elements and labeled circuit services that provide inter-domain connectivity and L2 topologies, including label isolation; common ethertypes; MAC, IPv4, and IPv6 addresses; implementation and measurement of performance isolation; and exclusive control and management of the topologies.