November 28, 2003

SDSC Bandwidth Challenge Winners Demonstrate at SC2003

Four teams affiliated with the San Diego Supercomputer Center (SDSC) won High Performance Bandwidth Challenge awards at SC2003, the annual conference for high-performance computing and communications, in Phoenix, Arizona, on November 20. In the Bandwidth Challenge, contestants from science and engineering research communities around the world demonstrate the latest technologies and applications for high-performance networking, many of which are so demanding that no ordinary computer network could sustain them.

At SC2003, the contestants were challenged to “significantly stress” the SCinet network infrastructure while moving meaningful data files across the multiple research networks that connect to SCinet, the conference’s temporary but powerful on-site network infrastructure. Continuing a tradition started at SC2000, Qwest Communications awarded monetary prizes for applications that made the most effective or “courageous” use of SCinet resources.

“Qwest is once again extremely pleased to sponsor the SC conference’s Bandwidth Challenge,” said Wesley K. Kaplow, chief technology officer for Qwest Government Services. “This year’s participants have clearly demonstrated that high-performance computing coupled with high-bandwidth networking is the foundation for igniting international innovation and collaboration.”

“While the results are impressive, the challenge is not just about blasting bits across the network,” said Bandwidth Challenge Co-Chair Kevin Walsh. “It’s really about driving science, and this year’s competition clearly illustrates the role of high-performance, high-bandwidth networks in current research in such areas as physics, biology and chemistry, as well as computer science.”

A team from IBM and SDSC won the award for “Best Commercial Tools” for their entry On-Demand File Access over a Wide Area with GPFS. The judges gave this prize for development and use of a commercial system that demonstrated high performance without significant impact on remote systems. The team posted a sustained rate of 8.96 gigabits per second.

Making data sets from one parallel filesystem available to a remote system over a grid can give computational scientists access to geographically distributed resources such as compute and visualization engines. The remote systems do not need to set up elaborate filesystems and provide storage; they merely read from and write to the parallel filesystems that are mounted “locally” over the wide area. In the demonstration, the NPACI Scalable Visualization Toolkit read data sets from the Southern California Earthquake Center and rendered images over the wide area in real time, using IBM’s General Parallel File System (GPFS), a high-performance, scalable, parallel filesystem for clustered environments.
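
From the application’s point of view, the appeal of this model is that ordinary file I/O is all that is required once the parallel filesystem is mounted; the wide-area transfer happens underneath the filesystem interface. The sketch below illustrates that access pattern in Python. The mount point, file name, and data layout are hypothetical, and this is an illustration of the general idea rather than code from the demonstration.

```python
# A minimal sketch of wide-area file access: once a parallel filesystem
# such as GPFS is mounted "locally," a remote renderer can read slices of
# a large dataset with ordinary file I/O, and the WAN transfer is handled
# transparently by the filesystem layer. Paths below are hypothetical.
import numpy as np

GPFS_MOUNT = "/gpfs/scec"                # hypothetical wide-area GPFS mount
DATASET = f"{GPFS_MOUNT}/wavefield.f32"  # hypothetical simulation output

# Map the remote file without copying it; only the pages actually read
# are fetched over the wide-area link.
volume = np.memmap(DATASET, dtype=np.float32, mode="r")

# Read just the slab a visualization node needs for one frame.
slab = np.array(volume[0:1_000_000])
print(f"read {slab.nbytes / 1e6:.1f} MB, mean amplitude {slab.mean():.4f}")
```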

“We were extremely pleased with the performance achieved in the distributed file system, which we believe heralds a new paradigm for grid computing,” said Phil Andrews, program director for High Performance Computing at SDSC. “In this approach, data transfers across a wide area network are completely transparent to the user, avoiding any changes to their normal mode of operation.”

The team members were Puneet Chaudhary and Roger Haskin of IBM, and Phil Andrews, Bryan Banister, Haisong Cai, Steve Cutchin, Jay Dombrowski, Patricia Kovatch, Martin W. Margo, Nathaniel Mendoza, Michael Packard, and Don Thorp of SDSC.

A team based at SDSC, with participation from Argonne National Laboratory, won the award for “Best Tools” for their entry High-Performance Grid-Enabled Data Movement with Striped GridFTP. The award was bestowed for the demo’s emphasis on creating common, standards-based tools that serve as building blocks for new applications, and for demonstrating their capability with visualization. The team demonstrated striped GridFTP from an application, transferring several files over a terabyte in size between a 40-node grid site at SDSC and a 40-node grid site at SC2003. By harnessing multiple nodes and multiple network interfaces with striped GridFTP, the team transferred data in parallel across the network at a sustained rate of 8.94 gigabits per second. The GridFTP file transfer was integrated into the NPACI Scalable Visualization Toolkit for rendering. Applications and datasets included code and files from the Southern California Earthquake Center.
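
To make the mechanics concrete, the following is a rough sketch of the striping idea: a file is cut into fixed-size blocks distributed round-robin across several data movers, each with its own TCP connection, so the aggregate rate approaches the sum of the per-connection rates. The hosts, ports, and framing here are hypothetical; this is not the GridFTP protocol itself, only an illustration of striped, parallel transfer.

```python
# Conceptual sketch of striped parallel transfer (not actual GridFTP).
# Each "stripe" is one TCP connection; blocks are assigned round-robin.
import socket
from concurrent.futures import ThreadPoolExecutor

BLOCK = 1 << 20                                      # 1 MiB per block
MOVERS = [("10.0.0.1", 5000), ("10.0.0.2", 5000),
          ("10.0.0.3", 5000), ("10.0.0.4", 5000)]    # hypothetical movers

def send_stripe(path, stripe_id):
    """Send every block b with b % len(MOVERS) == stripe_id on one socket."""
    host, port = MOVERS[stripe_id]
    with socket.create_connection((host, port)) as sock, open(path, "rb") as f:
        block = stripe_id
        while True:
            f.seek(block * BLOCK)
            data = f.read(BLOCK)
            if not data:
                break
            # Prefix each block with its offset and length so the receiver
            # can reassemble blocks arriving on different connections.
            header = (block * BLOCK).to_bytes(8, "big") + len(data).to_bytes(4, "big")
            sock.sendall(header + data)
            block += len(MOVERS)

with ThreadPoolExecutor(max_workers=len(MOVERS)) as pool:
    for i in range(len(MOVERS)):
        pool.submit(send_stripe, "dataset.bin", i)
```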

The team members were Phil Andrews, Bryan Banister, Haisong Cai, Steve Cutchin, Jay Dombrowski, Patricia Kovatch, Martin W. Margo, Nathaniel Mendoza, Michael Packard, and Don Thorp of SDSC, and William E. Allcock, John M. Bresnahan, Ian Foster, Rajkumar Kettimuthu, Joseph M. Link, and Michael E. Link of Argonne National Laboratory.

The winner in the category of “Best Application” was Multi-Continental Telescience, a multidisciplinary entry that showcased technology and partnerships encompassing the Biomedical Informatics Research Network (BIRN), NPACI Telescience, OptIPuter, and Pacific Rim Applications and Grid Middleware Assembly (PRAGMA) programs. The prize was awarded on the basis of the demo’s emphasis on the user, its interaction with scientific instruments, distributed collaboration, and in particular the ease of use of the system by domain scientists. The demo combined telescience, microscopy, biomedical informatics, optical networking, next-generation protocols, and collaborative research to solve multi-scale challenges in biomedical imaging. Specifically, the team demonstrated how network bandwidth and the IPv6 protocol can be effectively used to enhance the control of multiple data-acquisition instruments of different types, to enable interactive multi-scale visualization of data pulled from the BIRN Grid, and to facilitate large-scale grid-enabled computation. The coordinated environment included globally distributed resources and users, spanning multiple locations in the U.S., Argentina, Japan, Korea, the Netherlands, Sweden, and Taiwan. The team posted a sustained rate of 1.13 gigabits per second over international links.
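
As a rough illustration of one piece of this, remote instrument control over IPv6 can be as simple as opening a TCP session to each instrument’s IPv6 address and exchanging commands; the sketch below shows that pattern. The addresses (drawn from the IPv6 documentation prefix), port, and command protocol are all hypothetical, not those of the actual BIRN or Telescience systems.

```python
# Hypothetical sketch of controlling data-acquisition instruments over
# IPv6: one TCP session per instrument, simple line-based commands.
import socket

# Addresses use the 2001:db8::/32 documentation prefix; port is made up.
INSTRUMENTS = [("2001:db8::10", 7000), ("2001:db8::11", 7000)]

def send_command(addr, command):
    """Open an IPv6 TCP connection, send one command, return the reply."""
    with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
        s.connect(addr)
        s.sendall(command.encode() + b"\n")
        return s.recv(4096).decode()

for addr in INSTRUMENTS:
    print(addr[0], send_command(addr, "STATUS"))
```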

Participants included Steve Peltier, Abel Lin, David Lee, and Mark Ellisman of the BIRN project at the University of California, San Diego (UCSD), Tom Hutton of SDSC, Francisco Capani of Universidad de Buenos Aires, Oleg Shupliakov of the Karolinska Institute in Sweden, Shinji Shimojo and Toyokazu Akiyama of the Cybermedia Center at Osaka University, Hirotaro Mori of the Center for Ultra High Voltage Microscopy in Osaka, Fang-Pang Lin of Taiwan’s National Center for High-Performance Computing (NCHC), and the USA division of Japan’s KDDI R&D Labs.

The “Both Directions Award” was won by the Distributed Lustre File System Demonstration. Using the Lustre File System, a large multi-institutional team demonstrated both clustered and remote file system access, over very high bandwidth local links between SCinet and the ASCI exhibit, combined with remote access across the 2,000 miles between Phoenix and NCSA. Compute nodes in both locations accessed servers in both locations, reading and writing concurrently to a single file and to multiple files spread across the servers. The team achieved a rate of 9.02 gigabits per second. The judges took special note that this demonstration showed that not all applications move data in only one direction.
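
The access pattern at the heart of this demonstration is many clients reading and writing disjoint ranges of one shared file at the same time, which a parallel filesystem coordinates without serializing the whole file behind a single lock. The sketch below imitates that pattern with threads on one machine; the Lustre mount path and region sizes are hypothetical, and this stands in for what real compute nodes would do across the network.

```python
# Sketch of concurrent single-file access: each client owns a disjoint
# byte range of one shared file, so reads and writes can proceed in
# parallel. The path below is a hypothetical Lustre mount point.
import os
from concurrent.futures import ThreadPoolExecutor

SHARED_FILE = "/lustre/shared/output.dat"  # hypothetical shared file
CHUNK = 4 << 20                            # 4 MiB region per client
NCLIENTS = 8

def client(rank):
    fd = os.open(SHARED_FILE, os.O_RDWR | os.O_CREAT)
    try:
        # Write this client's own region; regions never overlap.
        os.pwrite(fd, bytes([rank % 256]) * CHUNK, rank * CHUNK)
        # Read a neighbor's region to exercise concurrent reads as well.
        peer = (rank + 1) % NCLIENTS
        os.pread(fd, CHUNK, peer * CHUNK)
    finally:
        os.close(fd)

with ThreadPoolExecutor(max_workers=NCLIENTS) as pool:
    for r in range(NCLIENTS):
        pool.submit(client, r)
```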

The team of academic, government laboratory, and industry partners consisted of Peter Braam, Eric Barton, Jacob Berkman, and Radika Vullikanti of Cluster File Systems; Hermann Von Drateln of Supermicro; Nic Huang of Acme Microsystems; Danny Caballes, John Szewc, Mike Allen, and Rick Crowell of Foundry Networks; Dave Fellinger, Ryan Weiss, and John Josephakis of DataDirect Networks; Jeff James and Matt Baker of Intel; Leonid Grossman and Marc Kimball of S2io; Vicki Williams and Luis Martinez of Sandia National Laboratories; Parks Fields of Los Alamos National Laboratory; Rob Pennington, Michelle Butler, Tony Rimovsky, Patrick Dorn, and Anthony Tong of the National Center for Supercomputing Applications; Phil Andrews, Patricia Kovatch, and Kevin Walsh of SDSC; and Alane Alchorn, Jean Shuler, Keith Fitzgerald, Dave Wiltzius, Bill Boas, Pam Hamilton, Chris Morrone, Jason King, Danny Auble, Jeff Cunningham, and Wayne Butman of Lawrence Livermore National Laboratory.

Other winners of the Bandwidth Challenge competition were:

The Bandwidth Lust team representing Stanford Linear Accelerator Center (SLAC), Caltech, Fermilab, and CERN for “Sustained Bandwidth” - otherwise known as the “Moore’s Law Move Over!” award - in their demonstration of Distributed Particle Physics Analysis using Ultra High Speed TCP on the GRiD. The judges believed that this entry demonstrated the best vision and articulation of the need for high performance networks to serve science. The team moved a total of 6551.134 gigabits of data, reaching 23.23 gigabits per second. Team members were Harvey Newman, Julian Bunn, Sylvain Ravot, Conrad Steenberg, Yang Xia, Dan Nae, Caltech; Les Cottrell, Gary Buhrmaster, SLAC; Wu-chun Feng, LANL; Olivier Martin, CERN / DataTAG.

Project DataSpace, in the category of “Application Foundation,” for using a Web service framework integrated with high-performance networking tools to provide an application foundation for the use of distributed datasets. The highest sustained data rate was 3.66 gigabits per second. Team members were Robert L. Grossman, Yunhong Gu, David Hanley, Xinwei Hong, Michal Sabala, University of Illinois at Chicago; Joe Mambretti, Northwestern University; Cees de Laat, Freek Dijkstra, Hans Blom, University of Amsterdam; Dennis Paus, SURFnet; Alex Szalay, Johns Hopkins University; and Nagiza F. Samatova and Guru Kora, Oak Ridge National Laboratory.

Transmission Rate Controlled TCP on Data Reservoir, University of Tokyo, in the category of “Distance Bandwidth Product and Network Technology,” for attention to the details of controlling multiple gigabit streams fairly over extremely long distances. The demo achieved average pipe utilization of over 65 percent with real disk-to-disk transfers, at a high sustained rate of 7.56 gigabits per second; a simplified sketch of sender-side rate pacing follows the team list below. Team members were Mary Inaba, Makoto Nakamura, Hiroaki Kamesawa, Junji Tamatsukuri, Nao Aoshima, Kei Hiraki, University of Tokyo; Akira Jinzaki, Junichiro Shitami, Osamu Shimokuni, Jun Kawai, Toshihide Tsuzuki, Masanori Naganuma, Fujitsu Laboratories; Ryutaro Kurusu, Masakazu Sakamoto, Yuuki Furukawa, Yukichi Ikuta, Fujitsu Computer Technologies.
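
The core of rate-controlled transmission can be pictured as a sender-side pacing loop: bytes are released at a target rate so that concurrent streams share a long-distance path fairly instead of bursting and overflowing router buffers. The sketch below shows one such loop; the target rate and chunk size are arbitrary, and this illustrates the general pacing idea rather than the Data Reservoir implementation.

```python
# Simplified sender-side rate pacing (an illustration, not the actual
# Data Reservoir code): sleep just long enough after each chunk that
# cumulative throughput tracks the target rate.
import socket
import time

TARGET_RATE = 125_000_000   # bytes/second, roughly 1 gigabit per second
CHUNK = 64 * 1024           # bytes released per send

def paced_send(sock, data, rate=TARGET_RATE):
    """Send `data` over `sock`, pacing output to `rate` bytes per second."""
    start = time.monotonic()
    sent = 0
    while sent < len(data):
        chunk = data[sent:sent + CHUNK]
        sock.sendall(chunk)
        sent += len(chunk)
        expected = sent / rate          # when this much should have gone out
        elapsed = time.monotonic() - start
        if expected > elapsed:
            time.sleep(expected - elapsed)
```

Smoothing output at the sender is what keeps multiple gigabit streams from stepping on one another over a path whose round-trip time is measured in hundreds of milliseconds.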

Trans-Pacific Grid Datafarm, in the category of “Distributed Infrastructure,” for demonstrating a real geographically distributed file system with high performance over long distances. The system was able to take advantage of multiple physical paths to achieve this performance.

Walsh noted that cutting-edge science carried out on an international scale is pushing the limits of currently available bandwidth. Current projections indicate that advances in Grid computing will grow in tandem with increases in high-performance, high-bandwidth networks.

Kaplow said that participants this year focused more on data storage and movement than in years past, and that their capabilities have increased significantly - especially in the face of the problems posed by large geographic distances.

“Next year, we are going to place additional emphasis on applications that use these facilities,” Kaplow said. “Also, we have seen an increase in the use of commercial and standards-based middleware to enable application development, which is key to enabling application writers to focus on their user requirements and less on how to push gigabits across kilometers.”

A graphical representation of each team’s effort, along with detailed statistics on the amount of data transferred, can be found at scinet.supercomp.org/2003/bwc/results/index.html.

The Bandwidth Challenge event was sponsored by Qwest Communications as the founding corporate sponsor and by SCinet as the SC2003 Committee sponsor. The Bandwidth Challenge was made possible by vital contributions to the SCinet infrastructure from Force10 Networks, NetOptics, Procket Networks, Level(3) Communications, Sorrento Networks, and Spirent Communications.

In addition to sponsoring the Bandwidth Challenge, Qwest Communications continued its longstanding relationship with SCinet, supporting several significant activities: provisioning 10 gigabit per second and 2.5 gigabit per second optical network links (lambdas) using Qwest QWave to support connections from the ESnet (Sunnyvale) and Abilene (Los Angeles) POPs to the Los Angeles cross-connect, providing additional 2.5 gigabit per second connections to Abilene in Seattle to support the demonstrations from Japan, and providing dark fiber in the Phoenix metro area.

For 2003, SCinet provided direct wide area connectivity to Abilene, TeraGrid, ESnet, DREN, and many other networks through peering relationships with these networks. Level(3) Communications delivered three separate 10 Gbps lambdas, utilizing the (3)Link Global Wavelength service to provide a total of more than 30 billion bits per second of bandwidth into Phoenix from a major cross-connect location in Los Angeles; Level(3) also provided premium IP service for exhibitors and attendees.

Now in its 16th year, the annual SC conference was sponsored by the Institute of Electrical and Electronics Engineers Computer Society and the Association for Computing Machinery’s Special Interest Group on Computer Architecture.
See www.sc-conference.org/sc2003 for more information.

URLs:
SC2003 - www.sc-conference.org/sc2003

SCinet - scinet.supercomp.org/2003/SCinet_2003_Public.html

Bandwidth Challenge results - scinet.supercomp.org/2003/bwc/results/index.html

Qwest Communications - qwest.com

Copyright 1993-2003 HPCwire. Redistribution of this article is forbidden by law without the expressed written consent of the publisher. For a free trial subscription to HPCwire, send e-mail to: trial@hpcwire.com