APPLICATIONS: Computer & Information Sciences
Advanced Networking Infrastructure & Research

Bandwidth Challenge from the Low-Lands

The avalanche of data already being generated by and for new and future High Energy and Nuclear Physics (HENP) experiments demands new strategies for how the data is collected, shared, analyzed and presented. For example, the SLAC BaBar experiment and JLab are each already collecting over a TB/day, and BaBar expects that rate to double in the coming year. SLAC and Fermilab’s CDF (Collider Detector at Fermilab) and D0 experiments have already gathered well over a petabyte of data, and the CERN Large Hadron Collider (LHC) experiment expects to collect over 10 million terabytes.
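
As a rough sense of scale, here is a minimal sketch that converts such daily volumes into the sustained line rate a replica site would need; the 24x7 duty cycle and decimal terabytes (1 TB = 10^12 bytes) are illustrative assumptions, not parameters from the demonstration.

    # Sustained network rate needed to replicate a daily data volume
    # continuously.  Assumes a 24x7 duty cycle and decimal units
    # (1 TB = 10**12 bytes) -- both are illustrative assumptions.
    def sustained_mbps(terabytes_per_day: float) -> float:
        bits_per_day = terabytes_per_day * 1e12 * 8
        return bits_per_day / 86400 / 1e6   # megabits per second

    for tb in (1.0, 2.0, 10.0):
        print(f"{tb:5.1f} TB/day  ->  {sustained_mbps(tb):7.1f} Mbit/s sustained")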

The strategy being adopted to analyze and store this unprecedented amount of data is the coordinated deployment of Grid technologies, such as those being developed for the Particle Physics Data Grid (PPDG) and the Grid Physics Network (GriPhyN). It is anticipated that these technologies will be deployed at hundreds of institutes that will be able to search out and analyze information from an interconnected worldwide grid of tens of thousands of computers and storage devices. This in turn will require the ability to sustain, over long periods, the transfer of large amounts of data among collaborating sites with relatively low latency.

This project is designed to demonstrate current data-transfer capabilities to several sites worldwide that have high-performance links. In a sense, the iGrid 2002 site acts like a HENP Tier 0 or Tier 1 site (an accelerator or major computation site), distributing copies of raw data to multiple replica sites. The demonstration runs over live production networks, with no attempt to manually limit other traffic, and the results are displayed in real time. Researchers investigate and demonstrate issues concerning TCP implementations on high-bandwidth, long-latency links, and create a repository of trace files of a few interesting flows. These traces, valuable to projects like DataTAG, help explain the behavior of transport protocols over various production networks.
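
To make the high-bandwidth, long-latency issue concrete, here is a minimal sketch of the bandwidth-delay product calculation that governs TCP window sizing and the use of parallel streams; the 1 Gbit/s path and 170 ms round-trip time are illustrative assumptions, not measurements from the demonstration.

    # Bandwidth-delay product (BDP): the amount of data that must be in
    # flight to keep a long, fat pipe full.  Link speed and RTT below are
    # illustrative assumptions, not figures from the demonstration.
    def bdp_bytes(link_bits_per_s: float, rtt_s: float) -> float:
        return link_bits_per_s * rtt_s / 8

    link = 1e9                  # assumed 1 Gbit/s path
    rtt = 0.170                 # assumed 170 ms round-trip time
    window = 64 * 1024          # classic 64 KB TCP window (no window scaling)

    bdp = bdp_bytes(link, rtt)
    single_stream = window * 8 / rtt / 1e6          # Mbit/s from one such window
    print(f"BDP: {bdp / 1e6:.1f} MB must be in flight to fill the pipe")
    print(f"One {window // 1024} KB window sustains at most {single_stream:.1f} Mbit/s")
    print(f"Parallel streams needed at that window: {bdp / window:.0f}")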

Acknowledgment: This demonstration uses SURFnet/StarLight, Internet2, ESnet, JANET, GARR, Renater2, Japanese wide-area networks and the EU DataTAG link between CERN and StarLight. Work is sponsored by the USA Department of Energy (DoE) HENP program; the USA DoE Mathematics and Information Computing Sciences (MICS) office; the USA National Science Foundation; the Particle Physics Data Grid; the International Committee for Future Accelerators; and the International Union of Pure and Applied Physics.

Contact
Antony Antony
Dutch National Institute for Nuclear Physics and High Energy Physics (NIKHEF), The Netherlands
antony@nikhef.nl

R. Les Cottrell
Stanford Linear Accelerator Center (SLAC), USA
cottrell@slac.stanford.edu

Collaborators
Participating remote sites each have one or more UNIX hosts running Iperf and BBFTP servers (a minimal Iperf client invocation is sketched after the list):
Ayumu Kubota, Asia Pacific Advanced Network (APAN) consortium, Japan
Linda Winkler, William E. Allcock, Argonne National Laboratory (ANL), USA
Dantong Yu, Brookhaven National Laboratory (BNL), USA
Harvey Newman, Julian J. Bunn, Suresh Singh, California Institute of Technology (Caltech), USA
Olivier Martin, Sylvain Ravot, CERN, Switzerland
Robin Tasker, Paul Kummer, Daresbury Laboratory, UK
Jim Leighton, ESnet, USA
Ruth Pordes, Frank Nagy, Phil DeMar, Fermi National Accelerator Laboratory (Fermilab), USA
Andy Germain, George Uhl, NASA Goddard Space Flight Center (GSFC), USA
Jerome Bernier, Dominique Boutigny, Institut National de Physique Nucléaire et de Physique des Particules (IN2P3), France
Fabrizio Coccetti, Istituto Nazionale di Fisica Nucleare (INFN), Milan, Italy
Emanuele Leonardi, INFN, Rome, Italy
Guy Almes, Matt Zekauskas, Stanislav Shalunov, Ben Teitelbaum, Internet2, USA
Chip Watson, Robert Lukens, Thomas Jefferson National Accelerator Facility (JLab), USA
Yukio Karita, Teiji Nakamura, KEK High Energy Accelerator Research Organization, Japan
Wu-chun Feng, Mike Fisk, Los Alamos National Laboratory (LANL), USA
Bob Jacobsen, Lawrence Berkeley National Laboratory (LBNL), USA
Shane Canon, LBNL National Energy Research Scientific Computing Center (NERSC), USA
Richard Hughes-Jones, Manchester University, UK
Antony Antony, NIKHEF, The Netherlands
Tom Dunigan, Bill Wing, Oak Ridge National Laboratory (ORNL), USA
Richard Baraniuk, Rolf Riedi, Rice University, USA
Takashi Ichihara, The Institute of Physical and Chemical Research (RIKEN) / RIKEN Accelerator Research Facility (RARF), Japan
John Gordon, Tim Adye, Rutherford Appleton Laboratory (RAL), Oxford, UK
Reagan Moore, Kevin Walsh, Arcot Rajasekar, San Diego Supercomputer Center, University of California, San Diego, USA
Les Cottrell, Warren Matthews, Paola Grosso, Gary Buhrmaster, Connie Logg, Andy Hanushevsky, Jerrod Williams, Steffen Luitz, SLAC, USA
Warren Matthews, Milt Mallory, Stanford University, USA
William Smith, Rocky Snyder, Sun Microsystems, USA
Andrew Daviel, TRIUMF, Canada
Yee-Ting Li, Peter Clarke, University College London, UK
Constantinos Dovrolis, University of Delaware, USA
Paul Avery, Gregory Goddard, University of Florida, USA
Thomas Hacker, University of Michigan, USA
Joe Izen, University of Texas at Dallas, USA
Miron Livny, Paul Barford, Dave Plonka, University of Wisconsin, Madison, USA
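
As a sketch of how a memory-to-memory throughput test toward one of these Iperf servers might be driven from the iGrid floor; the host name, socket buffer, duration and stream count below are hypothetical placeholders, not the demonstration's actual settings.

    import subprocess

    # Run an Iperf TCP throughput test against a remote site's Iperf server.
    # The host name, socket buffer, duration and stream count are hypothetical
    # placeholders, not the demonstration's actual settings.
    cmd = [
        "iperf",
        "-c", "iperf.remote-site.example.org",  # hypothetical remote Iperf host
        "-w", "4M",                             # large socket buffer for a long fat pipe
        "-t", "30",                             # 30-second test
        "-P", "8",                              # 8 parallel TCP streams
        "-f", "m",                              # report throughput in Mbit/s
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)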

www-iepm.slac.stanford.edu/monitoring/bulk/igrid2002