Earlier Testbeds
  • Advanced Networking Initiative
    StarLight participated in the Department of Energy’s (DOE) Advanced Networking Initiative, which developed a prototype 100 Gbps production-ready science network that included a 100 Gbps experimental testbed and a national dark-fiber testbed. These testbeds were used by researchers from government institutions, universities and industry to experiment with innovative network technologies without interfering with production traffic.
  • AMIS Testbed
    StarLight supported a testbed for research on Advanced Measurement Instrument and Services for Programmable Network Measurement of Data-Intensive Flows (AMIS).
  • BRIDGES
    StarLight collaborated with the BRIDGES initiative (Binding Research Infrastructures for the Development of Global Experimental Science), an NSF-funded high-performance network testbed connecting research programs in the United States and Europe. BRIDGES provides a flexible 100 Gbps trans-Atlantic backbone ring connecting Washington DC, Paris, Amsterdam, and New York City. These four locations provide ready access to research programs in the US and EU, enabling collaborating research teams on two continents to establish end-to-end network infrastructure between and among their sites. BRIDGES is developing and deploying advanced network programmability software to deliver rapid reconfiguration, predictable and repeatable service provisioning, seamless multi-domain scalability, and advanced APIs that enable cyber-infrastructure control and orchestration through automated agents.
  • DataTAG
    StarLight was a research partner in the EU-funded DataTAG project. The StarLight facility included a key node of the DataTAG project, which was one of several major international Grid development projects established both within the European Community, and in the US. These projects were created to progress toward the common goal of providing transparent access to the massively distributed computing infrastructure that is needed to meet the challenges of modern data intensive applications. The DataTAG project created a large-scale intercontinental Grid testbed that focused upon advanced networking issues and interoperability between multiple intercontinental Grid domains, extending the capabilities of each and enhancing the worldwide program of Grid development. The project addressed issues that arise in high performance inter-Grid networking, including sustained and reliable high performance data replication, end-to-end advanced network services, and novel monitoring techniques. The project also directly addressed the issues of interoperability between the Grid middleware layers such as information and security services.
  • Digital Research Alliance of Canada Research Testbeds
    The non-profit Digital Research Alliance of Canada provides services to Canadian researchers to advance the nation’s international leadership in the knowledge economy. Key focal areas are advanced research computing (ARC), research data management (RDM), and research software (RS), which collectively provide a resource platform for multiple research communities. To stage demonstrations of advanced services for computational science, the Digital Research Alliance, CANARIE (the national R&E network of Canada), iCAIR, the StarLight consortium, the Metropolitan Research and Education Network (MREN), and SCinet collaborate on an international testbed for demonstrations at the annual IEEE/ACM International Conference on High Performance Computing.
  • Distributed Optical Testbed (DOT)
    StarLight supported the NSF-funded Distributed Optical Testbed (DOT) initiative. DOT was designed and implemented by an inter-organizational cooperative research partnership to facilitate the research and development of innovative techniques that require the efficient execution of distributed applications. The DOT research partners developed innovative techniques required by high performance next-generation applications, which were designed to take advantage of new types of information technology infrastructure, including Grid computing, advanced middleware such as Globus, and leading-edge optical networks.
  • DWDM-RAM Testbed
    Supported by StarLight and with funding from DARPA, iCAIR and the Nortel Advanced Technology division established a project that developed and demonstrated a novel architecture for data-intensive services supported by distributed infrastructure based on optical networks with inherent dynamic lightpath provisioning. This type of architecture ("DWDM-RAM") can be used by multiple data-intensive application communities. The architecture was designed for optimized, fault-resilient, dynamic management of services supporting large, n-way replicated immutable data objects over a large-scale MAN/LAN optical network testbed (OMNInet) interconnecting Grid computational clusters. The DWDM-RAM architecture is innovative in several respects. For example, it closely integrates application-level data resources with DWDM optical resources, resulting in high-performance and highly scalable data migration and management, for instance through integrated data discovery and transfer operations. Like other OMNInet service innovations, this approach combines data services with a dynamically wavelength-switched layer. Using this technique, high volumes of distributed data can be transferred in parallel using resources such as discovered lightpaths, data repository locations, local and remote I/O capacity, and replication sites. The DWDM-RAM architecture also provides a migration path, as a supplement to services based on traditional, performance-limited layer 3 routing protocols. A prototype implementation of the DWDM-RAM architecture integrated high-volume, high-performance data services with dynamically switched wavelength optical networking and demonstrated: 1) content-addressed data retrieval; 2) a meshed DWDM switched network capable of establishing an end-to-end lightpath in seconds; 3) a signaling function between the application and the DWDM network that allows the integration of application metadata and network metadata; 4) discovery functions operating on the combined application and network metadata; 5) large-scale data-transfer facilities exploiting circuit-switched networks; and 6) out-of-band functions for adaptive placement of data replicas. The architecture is extensible to include additional functionality, such as enhanced file system semantics.
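    A minimal sketch of the pattern described above, coupling dynamic lightpath setup with a bulk data transfer, assuming a hypothetical provisioning interface (the LightpathService class and its methods are illustrative placeholders, not the actual DWDM-RAM or OMNInet APIs):

      import time

      class LightpathService:
          """Illustrative stand-in for a dynamic lightpath provisioning service."""

          def request_lightpath(self, src, dst, gbps):
              # In DWDM-RAM this step would signal the optical control plane to
              # allocate a wavelength between the endpoints; here the setup
              # delay is only simulated.
              time.sleep(0.1)
              return {"id": 1, "src": src, "dst": dst, "gbps": gbps}

          def release_lightpath(self, path):
              # Return the wavelength to the pool once the transfer completes.
              print(f"released lightpath {path['id']}")

      def replicate(dataset_locations, dest_cluster, service):
          """Pick a replica source, set up a lightpath, transfer, then release."""
          src_cluster = dataset_locations[0]      # data discovery step (simplified)
          path = service.request_lightpath(src_cluster, dest_cluster, gbps=10)
          try:
              print(f"transferring over {path['src']} -> {path['dst']}")
              # ... bulk data movement over the provisioned path would go here ...
          finally:
              service.release_lightpath(path)

      replicate(["cluster-a.example.org"], "cluster-b.example.org", LightpathService())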
  • EMERGE Differentiated Services On a Regional Network Testbed
    StarLight participated in the design, development, and operation of the EMERGE Testbed – Differentiated Services on a Regional Network, which deployed advanced differentiated services across autonomous networks in two regimes: when priority flows represented a small fraction of available capacity and when they represented a significant fraction of it. Motivating applications included those in Combustion, Climate Studies, and High Energy Physics.
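    As a generic illustration of differentiated services marking (not the EMERGE toolset itself), an application on Linux can tag its priority flows by setting the DSCP bits on a socket; the code point (EF, 46) and the endpoint below are only examples:

      import socket

      # DSCP Expedited Forwarding (46) occupies the upper six bits of the TOS byte.
      DSCP_EF = 46
      TOS_VALUE = DSCP_EF << 2

      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
      # Packets sent on this socket now carry the EF code point, so DiffServ-aware
      # routers along the path can give the flow priority treatment.
      sock.connect(("example.org", 5001))     # placeholder endpoint
      sock.sendall(b"priority flow data")
      sock.close()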
  • EnLIGHTened
    StarLight supported a research partnership with the EnLIGHTened project, which investigated dynamic, adaptive, coordinated and optimized use of networks connecting geographically distributed high-end computing and scientific instrumentation resources for high performance real-time problem resolution. The NSF-funded EnLIGHTened project was a collaborative interdisciplinary research initiative that researched the integration of optical control planes with Grid middleware under highly dynamic requests for heterogeneous resources.
  • ExoGENI: ESnet–StarLight–ANA-n*100G–SURFnet/NetherLight 100/40 Gbps Testbed
    StarLight supported the ExoGENI component of the NSF Global Environment for Network Innovations (GENI) initiative, which created a highly programmable national experimental research testbed. iCAIR implemented a core ExoGENI node at the StarLight Facility, which was used for a wide range of experiments.
  • GEMnet
    StarLight supported a partnership with NTT Laboratories to conduct network research projects. Initially, NTT successfully operated two experimental networks, GALAXY and GEMnet, to explore the effectiveness of ultra-high-speed communications technologies applied to advanced scientific research and to support research and development of global information-sharing services, respectively. Later, to adapt to new technological and application requirements, NTT formulated a new testbed concept, GEMnet2, combining the two separate experimental networks that had been conceived for different purposes, and completed the first phase of construction of the new testbed. Its three key aims are to test technologies for every aspect of communications, to provide very wide bandwidth that can accommodate very fast applications without restraint, and to promote collaboration with other research networks. NTT used DWDM/CWDM technologies to build multiple circuits between NTT's R&D centers in Musashino, Yokosuka, and Atsugi and collaborating national research institutes including NII, NICT, and NAOJ. GEMnet2 is also connected to SINET and to the US with circuits to facilitate international experiments requiring very wide bandwidth.
  • Global Environment for Network Innovations (GENI)
    From its inception, the StarLight Consortium was a participant in the NSF-funded (CISE/CNS) Global Environment for Network Innovations (GENI), a unique virtual laboratory for at-scale networking experimentation envisioning future internets. The GENI mission was to: a) Enable repeatable experiments on large, complex, networked systems; b) Enable transformative research at the frontiers of network science and engineering; and c) Inspire and accelerate the potential for groundbreaking innovations of significant socio-economic impact. As part of this project, StarLight supported the implementation of 34 InstaGENI nodes on 34 different campuses and also hosted one, along with an ExoGENI node, at the StarLight facility. StarLight also supported staged demonstrations at the GENI Engineering Conferences.
  • Global Lambda Integrated Facility (GLIF)
    StarLight was a founding member of the Global Lambda Integrated Facility (GLIF) initiative, which established an international organization to develop and promote new concepts, methods, and technologies based on international lightpath (lightwave/lambda) networking. This organization comprised National Research and Education Networks (NRENs), networking consortia, corporations, universities, and other institutions, the majority of which supported flexible lightpath provisioning. GLIF participants provided testbed lightpaths internationally to support multiple research and development initiatives directed at creating new international communication services. As a global integrated facility, the GLIF initiative supported data-intensive scientific research, optical middleware development, the creation of new network management techniques, and specialized testbeds.
  • Grid Distributed Computation Network Testbeds
    The StarLight consortium participated in designing, implementing, and operating multiple Grid networking testbeds. The development and deployment of computational Grids focused on creating a more seamless and direct means of utilizing multiple types of resources within a dynamic environment. The majority of Grid projects have been directed at solving complex problems that are bandwidth, data, and compute-cycle intensive, such as those encountered by large-scale e-Science. Initially, Grids were based on static, routed networks. StarLight and its research partners developed methods that enabled network resources to become first-class entities within Grid environments, that is, controllable by other Grid processes like any other resource. In particular, the StarLight consortium developed new methods for allowing lightpaths within Grid environments to be controlled, so that network topologies can be dynamically reconfigured. In these advanced Grid architectures, such techniques transform communication services within a distributed environment. For example, instead of being implemented as a standard communication mechanism, the network becomes a high performance backplane for highly distributed computational resources.
  • InstaGENI
    The StarLight Consortium supported the InstaGENI initiative as a foundation of the NSF Global Environment for Network Innovations (GENI) project, one of two rack deployment efforts. InstaGENI was a collaborative partnership that included HP, Northwestern, Princeton, the University of Utah, and the Open Networking Institute. InstaGENI's 34 racks featured a lightweight, expandable cluster design incorporating integration with the FlowVisor OpenFlow Aggregate Manager (FOAM), the ProtoGENI and PlanetLab Aggregate Managers, and L2 connectivity among national research networks. InstaGENI racks also federated with existing Slice Authorities such as GPO GENI, ProtoGENI, and PlanetLab Central, enabling researchers in these communities to allocate resources across the InstaGENI deployment.
  • GENI Experiment Engine (GEE)
    The StarLight consortium supported the "GENI Experiment Engine" (GEE) initiative, a key component of the Global Environment for Network Innovations (GENI) project, a US-funded research platform for experimenting with future internet technologies. GEE provided a simplified, Platform-as-a-Service (PaaS) layer on top of GENI's underlying distributed infrastructure (IaaS) to let researchers quickly allocate resources, deploy applications (like network protocols or distributed systems), run experiments, collect results, and tear them down within minutes, making experimentation faster and easier.
  • Global Ring Network for Advanced Application Development (GLORIAD)
    StarLight participated in the NSF-funded Global Ring Network for Advanced Application Development initiative, which implemented an international facility to support scientists worldwide with advanced networking services and technologies for enhanced communications and data exchange, active collaboration, and integrated processes. GLORIAD supported large-scale applications, communication services, large-scale data transport, access to unique scientific facilities, including Grid environments, and specialized network-based tools and technologies for diverse communities of scientists, engineers, and other researchers. GLORIAD was a partnership among the US, China, Russia, Canada, the Netherlands, Korea, Denmark, Finland, Iceland, Norway, and Sweden. GLORIAD implemented capabilities for providing optical paths within its environment for specialized functions, such as experimental research and demonstrations.
  • High Performance Digital Media Network Testbed (HPDMnet)
    StarLight participated in the international HPDMnet digital media testbed. Support for digital media has been one of the fastest growing requirements of the Internet as demand transitions from services designed to support primarily text and images to those intended also to support rich, high-quality streaming multimedia. In response to the need to address this important 21st century communications challenge, an international consortium of network research organizations established an initiative, the High Performance Digital Media Network (HPDMnet), to investigate key underlying problems, to design potential solutions, to prototype those solutions on a global experimental testbed, and to create an initial set of production services. The HPDMnet service was designed to support not only general types of digital media but also those based on extremely high resolution, high-capacity data streams. These HPDMnet services, which are based on a wide range of advanced architectural concepts at all layers, provide a framework for network middleware that allows nontraditional resources to be used to enable new network services, including those based on dynamically provisioned international lightpaths supported by flexible optical-fiber and optical switching technology. These HPDMnet services have been showcased at major national and international forums, and they have been implemented within several next-generation open communications exchanges.
  • International BigData Express Testbed
    The StarLight consortium supported an international multi-organization collaboration, led by Fermi National Accelerator Laboratory, to create a scalable and high-performance data transfer platform for global data-intensive science. Big data has emerged as a driving force for scientific discoveries. To meet data transfer challenges in the big data era, DOE's Advanced Scientific Computing Research (ASCR) office funded the BigData Express project. BigData Express is targeted at providing schedulable, predictable, and high-performance data transfer service for DOE's large-scale science computing facilities and their collaborators. Through a series of experiments and demonstrations, BigData Express services used specialized software to demonstrate efficient bulk data movement over wide area networks. The following BigData Express features were demonstrated:
    1. a peer-to-peer, scalable, and extensible model for data transfer services
    2. a visually appealing, easy-to-use web portal
    3. a high-performance data transfer engine
    4. orchestration and scheduling of system (DTN), storage, and network (SDN) resources involved in file transfers (see the sketch after this entry)
    5. on-demand provisioning of end-to-end network paths with guaranteed QoS
    6. robust data transfer service provisioning through strong error handling mechanisms
    7. safe and secure data transfer services using multiple security mechanisms
    8. interoperation between BigData Express and SENSE
    9. integration of BigData Express with scientific workflows
    Collaboration participants included Fermi National Accelerator Laboratory, iCAIR, the StarLight Consortium, the Metropolitan Research and Education Network (MREN), UMD, KISTI, KSTAR, SURFnet, Ciena, Pacific Wave, AmLight, the Pacific Research Platform, the National Research Platform, and the Global Research Platform.
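    A minimal sketch of the kind of co-scheduling implied by item 4 above, reserving DTN, storage, and network-path resources for a common time window before a transfer is launched; the resource names and the reservation structure are hypothetical illustrations, not the BigData Express interfaces:

      from dataclasses import dataclass

      @dataclass
      class Reservation:
          resource: str      # "dtn", "storage", or "sdn-path"
          detail: str        # which DTN, storage volume, or network path
          start: float       # reservation window start (epoch seconds)
          end: float         # reservation window end

      def co_schedule(transfer, window):
          """Reserve all three resource types for the same time window.

          A real broker would negotiate each reservation with DTN agents, the
          storage system, and the SDN controller, rolling everything back if
          any single reservation fails; here the reservations are only records.
          """
          start, end = window
          return [
              Reservation("dtn", transfer["src_dtn"], start, end),
              Reservation("storage", transfer["dst_volume"], start, end),
              Reservation("sdn-path",
                          f"{transfer['src_dtn']} -> {transfer['dst_dtn']}",
                          start, end),
          ]

      transfer = {"src_dtn": "dtn1.example.org", "dst_dtn": "dtn2.example.org",
                  "dst_volume": "/scratch/projectX"}
      for r in co_schedule(transfer, (0.0, 3600.0)):
          print(r)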
  • Illinois Wired/Wireless Infrastructure for Research and Education (I-WIRE)
    The StarLight Facility supported the core nodes for the Illinois Wired/Wireless Infrastructure for Research and Education (I-WIRE) project, which developed a state-wide private optical fiber based network to support advanced scientific research and engineering. This project deployed a first-of-its-kind dark fiber across Illinois to support computationally and data intensive advanced applications in physics, chemistry, biology, high performance computing, data mining, astrophysics, scientific visualization, and others. This fabric supported the NSF TeraGrid project - the Distributed Terascale Facility.
  • JGN-X
    For many years, the StarLight Facility supported the Japanese JGN testbed project. The National Institute of Information and Communications Technology (NICT) of Japan initiated the testbed as the Japan Gigabit Network (JGN). Later it was redesigned as JGN2plus and then as the JGN-X (JGN eXtreme) project. A related collaboration was with StarBED3, a large-scale emulation environment, enabling JGN-X to serve as a general-purpose testbed supporting experiments ranging from emulation to wide-area network experiments.
  • Multi-Mechanisms Adaptation for The Future Internet (MAKI), US-EU
    StarLight supported an international grand challenge initiative in communication systems, addressing the increasing dynamics and variation of the conditions in which these systems operate, constantly shifting use cases, and growing quality requirements. StarLight and the DFG CRC MAKI (a Collaborative Research Centre of the German Science Foundation, Deutsche Forschungsgemeinschaft) addressed this challenge with the goal of enabling automated transitions between functionally equivalent mechanisms in communication systems at runtime, including the coordination of multiple concurrent transitions that influence one another. CRC MAKI is positioned in the context of international research efforts toward a “future Internet.”
  • mdtmFTP High Performance Transport Tool Testbed
    The StarLight Consortium supported an initiative led by Fermi National Accelerator Laboratory (FNAL) established to address challenges in high performance data movement for large-scale science. The FNAL network research group developed mdtmFTP, a high-performance data transfer tool that optimizes data transfer on multicore platforms. mdtmFTP has a number of advanced features. First, it adopts a pipelined I/O design: data transfer tasks are carried out in a pipelined manner across multiple cores, and dedicated threads are spawned to perform network and disk I/O operations in parallel. Second, mdtmFTP uses multicore-aware data transfer middleware (MDTM) to schedule an optimal core for each thread, based on system configuration, in order to optimize throughput across the underlying multicore platform. Third, mdtmFTP implements a large virtual file mechanism to efficiently handle lots-of-small-files (LOSF) situations. Finally, mdtmFTP utilizes optimization mechanisms such as zero copy, asynchronous I/O, batch processing, and pre-allocated buffer pools to maximize performance.
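    A generic illustration of the pipelined, core-aware I/O pattern described above (not the mdtmFTP implementation): one thread pinned to a core reads file blocks into a bounded queue while a second thread, pinned to another core, drains it. os.sched_setaffinity is Linux-only, the core numbers are arbitrary, and the network stage is reduced to a byte count:

      import os, queue, threading

      BLOCK = 4 * 1024 * 1024            # 4 MiB read blocks
      blocks = queue.Queue(maxsize=8)    # bounded queue keeps the two stages balanced

      def reader(path, core):
          os.sched_setaffinity(0, {core})        # pin the calling thread to one core
          with open(path, "rb") as f:
              while chunk := f.read(BLOCK):
                  blocks.put(chunk)              # disk I/O stage of the pipeline
          blocks.put(None)                       # sentinel: end of file

      def sender(core):
          os.sched_setaffinity(0, {core})
          sent = 0
          while (chunk := blocks.get()) is not None:
              sent += len(chunk)                 # network I/O stage would write to a socket here
          print(f"pipelined {sent} bytes")

      t1 = threading.Thread(target=reader, args=("/etc/hostname", 0))
      t2 = threading.Thread(target=sender, args=(1,))
      t1.start(); t2.start(); t1.join(); t2.join()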
  • National LambdaRail (NLR)
    The StarLight Consortium supported the design, development, and implementation of National LambdaRail (NLR), a US national distributed facility with a foundation of leased optical fiber. NLR was deliberately designed and implemented as a facility, not as a network; consequently, it could support many different types of networks, including advanced experimental research networks, over a common core infrastructure. Half of the capacity of NLR was devoted to advanced research, related not only to fundamental technology research but also to topics such as new methods for supporting science applications. NLR implemented core nodes at the StarLight Facility.
  • Optical Dynamic Intelligent Network (ODIN) Testbed
    The StarLight Consortium supported the development of the Optical Dynamic Intelligent Network (ODIN) experimental architecture, created to explore new techniques for lightpath provisioning, in particular as a mechanism for bringing directly into applications capabilities that traditionally are placed deep within the core of networks. This method of closely integrating edge processes with foundation network resources is a fundamental departure from conventional implementations, and it enables networks to be much more powerful, flexible, scalable, and manageable than they have been previously. ODIN extensions also allow for electronic circuit provisioning, e.g., using VLANs. (Ref: "Optical Dynamic Intelligent Network Services (ODIN): An Experimental Control Plane Architecture for High Performance Distributed Environments Based On Dynamic Lightpath Provisioning," IEEE Communications, March 2006.)
  • OptIPuter
    StarLight was a research participant in the National Science Foundation-funded OptIPuter project. The OptIPuter was a national and international distributed facility that closely integrated multiple IT components, including optical networking, the Internet Protocol (IP), high performance computational clusters, computer storage, and visualization technologies. This infrastructure envisioned tightly coupled computational resources connected over parallel optical networks using the IP protocol. The OptIPuter exploited a new world in which the central architectural element is optical networking, not computers, creating "supernetworks". This paradigm shift required large-scale, applications-driven system experiments and a broad multidisciplinary team to understand and develop innovative solutions for a "LambdaGrid" world. The goal of this new architecture was to enable scientists who are generating terabytes and petabytes of data to interactively visualize, analyze, and correlate their data from multiple storage sites connected to optical networks.
  • Pacific Research Platform (PRP)
    The StarLight Consortium was a founding member of the Pacific Research Platform (PRP), which integrated Science DMZs, an architecture developed by the U.S. Department of Energy’s Energy Sciences Network (ESnet), into a high-capacity regional “freeway system.” This system makes it possible for large amounts of scientific data to be moved between scientists’ labs and their collaborators’ sites, supercomputer centers or data repositories, even in the cloud.
  • Photonic Data Services
    The StarLight Consortium supported the Photonic Data Services initiative, a partnership that included iCAIR at Northwestern, the National Center for Data Mining (NCDM) at UIC, and the Laboratory for Advanced Computing at UIC, which developed new methods for integrating high performance data management techniques with advanced methods for dynamic lightwave provisioning. These techniques are termed "Photonic Data Services." These services combine data transport protocols developed at the UIC research centers with wavelength provisioning protocols developed at iCAIR, such as the Optical Dynamic Intelligent Networking protocol (ODIN). Researchers at iCAIR and NCDM conducted a series of tests to ensure optimal performance of a variety of network components and protocols, such as TCP and UDP, including testing methods using services for parallel TCP striping (GridFTP). Researchers at NCDM used multiple testbeds, including the national TeraFlow Network, the state-wide I-WIRE network, and the metro area OMNInet, to test protocols that allow network-based applications to be designed with reliable end-to-end performance and speeds that scale to multiple Gbps. These protocols include UDT, an open-source library for building network applications with advanced functionality. NCDM's UDT is an innovative protocol that uses UDP for data transport while providing reliability through a TCP-based control channel. NCDM and iCAIR have used Photonic Data Services demonstrations to set a new high performance record for trans-Atlantic data transit.
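    A generic sketch of parallel TCP striping of the kind mentioned above (GridFTP-style), not the NCDM or iCAIR implementations: the file is split into stripes and each stripe is sent over its own TCP connection with a small offset/length header, so a cooperating receiver can reassemble the data by offset. The host and port are placeholders:

      import os, socket, struct, threading

      def send_stripe(path, offset, length, host, port):
          """Send one stripe: an (offset, length) header followed by the bytes."""
          with open(path, "rb") as f, socket.create_connection((host, port)) as s:
              f.seek(offset)
              s.sendall(struct.pack("!QQ", offset, length))    # 16-byte header
              remaining = length
              while remaining > 0:
                  chunk = f.read(min(1 << 20, remaining))
                  s.sendall(chunk)
                  remaining -= len(chunk)

      def striped_send(path, host="receiver.example.org", port=5002, stripes=4):
          """Split the file into stripes and send them over parallel connections."""
          size = os.path.getsize(path)
          step = (size + stripes - 1) // stripes
          threads = []
          for i in range(stripes):
              offset = i * step
              length = min(step, size - offset)
              if length <= 0:
                  break
              t = threading.Thread(target=send_stripe,
                                   args=(path, offset, length, host, port))
              t.start()
              threads.append(t)
          for t in threads:
              t.join()

      # striped_send("/path/to/large/file")   # requires a matching striped receiver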
  • Simple Path Control (SPC) Protocol Testbed
    The StarLight Consortium supported a research testbed that created the Simple Path Control protocol, a signaling mechanism that allows edge processes, including applications, to communicate requirements for specific paths through a network by signaling to a server capable of establishing such paths using core network resources. When such a request is signaled, the server identifies an appropriate path through the network topology based on information about resource availability. It then configures the corresponding topology and informs the edge process that the path is ready for use. SPC also provides a function for explicitly releasing paths that are no longer needed. This protocol was submitted as a draft to the IETF.
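    A minimal sketch of the request/ready/release exchange described above; the JSON message fields and the controller endpoint are hypothetical illustrations of the pattern, not the encoding defined in the SPC draft:

      import json, socket

      def _exchange(server, message):
          """Send one JSON message to the path control server and read its reply."""
          with socket.create_connection(server) as s:
              s.sendall((json.dumps(message) + "\n").encode())
              return json.loads(s.makefile().readline())

      def request_path(server, src, dst, gbps):
          reply = _exchange(server, {"type": "PATH_REQUEST",
                                     "src": src, "dst": dst, "gbps": gbps})
          # The server selects a route, configures the core resources, and answers
          # with an identifier once the path is ready for use.
          assert reply["type"] == "PATH_READY"
          return reply["path_id"]

      def release_path(server, path_id):
          """Explicitly release a path that is no longer needed."""
          _exchange(server, {"type": "PATH_RELEASE", "path_id": path_id})

      # controller = ("spc-server.example.org", 6000)   # placeholder controller address
      # path_id = request_path(controller, "hostA", "hostB", gbps=10)
      # ...use the provisioned path...
      # release_path(controller, path_id)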
  • TransLight
    StarLight was a foundation resource facility for the TransLight project, funded by the National Science Foundation, which developed an advanced global network for large-scale e-Science. Other partners included NetherLight (Netherlands), UKLight (Britain), CANARIE (Canada), CERNnet (CERN, the European Union particle physics laboratory network), GEANT (European Union), GLORIAD, and TransPAC (Asia Pacific networks, including those coordinated through the Asia Pacific Advanced Networking organization (APAN)).
  • UltraLight
    The StarLight Consortium supported the UltraLight research project, which had a core hub at the StarLight facility. UltraLight was an initiative established by a partnership of network engineers and experimental physicists developing new methods for providing the advanced network services required to support the next generation of high energy physics. The physics research community has been designing and implementing new experimental facilities and instrumentation that will be the foundation for the next frontier of fundamental physics research. These facilities and instruments generate multiple petabytes of particle physics data that are analyzed by physicists world-wide. A key focus of the UltraLight project was the development of capabilities required for this petabyte-scale global analysis. The project also developed novel monitoring tools for distributed high performance networks based on the MonALISA project. These efforts were undertaken in partnership with CERN, Europe's particle physics laboratory (located in Switzerland), which is one of the world's largest generators of scientific data, producing hundreds of petabytes per year.
  • UltraScience Network
    StarLight was a collaborative partner in the UltraScience Network, a Department of Energy (DOE) experimental research wide-area testbed designed to enable the next generation of DOE large-scale, highly distributed science projects, which have high performance and flexibility requirements that cannot be supported by traditional networking. These applications require a wide range of advanced capabilities. The UltraScience Network was instantiated as a prototype of a new national network that leverages advanced optical networking technologies. It provides on-demand dedicated bandwidth channels at multi-, single-, and sub-lambda resolutions between its edges. Various types of protocol, middleware, and application research projects can make use of the dedicated channels provisioned. The UltraScience Network implemented a core hub at the StarLight facility.