November 24, 2004
World Network Speed Record Quadrupled
Caltech, SLAC, Fermilab, CERN, Florida and Partners in the UK, Brazil and Korea Set 101 Gbps Mark During the SC'04 Bandwidth Challenge
PITTSBURGH, PA -- For the second consecutive year, the "High Energy
Physics" team of physicists, computer scientists and network engineers led by
the California Institute of Technology and their partners at the Stanford Linear
Accelerator Center (SLAC), Fermilab, CERN and the University of Florida, as
well as international participants from the UK (University of Manchester, UCL and
UKLight), Brazil (Rio de Janeiro State University, UERJ, and the State
Universities of São Paulo, USP and UNESP) and Korea (Kyungpook National
University, KISTI) joined forces at the Supercomputing 2004 (SC04) Bandwidth
Challenge to capture the Sustained Bandwidth Award. Their demonstration
of "High Speed TeraByte Transfers for Physics" achieved a throughput of 101
gigabits per second (Gbps) to and from the show floor, which exceeds the
previous year's mark of 23.2 Gbps, set by the same team, by a factor of more
than four. The record data transfer speed is equivalent to downloading three full
DVD movies per second, or transmitting all of the content of the Library of
Congress in 15 minutes. It also has been estimated to be approximately 5% of
the total rate of production of new content on Earth during the test.
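The equivalences quoted above follow from straightforward unit arithmetic. The DVD and Library of Congress sizes used below are common rough estimates, not figures taken from the team's announcement:

```python
# Back-of-envelope check of the equivalences quoted above.
# Assumed sizes (not from the press release): a single-layer DVD holds
# about 4.7 GB, and the Library of Congress print collection is often
# estimated at roughly 10 TB.

RATE_GBPS = 101                      # sustained throughput, gigabits/s
rate_bytes = RATE_GBPS * 1e9 / 8     # bytes per second (~12.6 GB/s)

dvd_bytes = 4.7e9                    # single-layer DVD capacity
loc_bytes = 10e12                    # rough Library of Congress estimate

dvds_per_second = rate_bytes / dvd_bytes
loc_minutes = loc_bytes / rate_bytes / 60

print(f"{dvds_per_second:.1f} DVDs per second")            # ~2.7, i.e. about three
print(f"Library of Congress in {loc_minutes:.0f} minutes")  # ~13
```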
The new mark, according to Bandwidth Challenge (BWC) sponsor Dr. Wesley
Kaplow, V.P. of Engineering and Operations for Qwest Government Services,
exceeded the sum of all the throughput marks submitted in the present and
previous years by other BWC entrants. The extraordinary achieved bandwidth
was made possible in part through the use of the FAST TCP protocol developed
by Professor Steven Low and his Caltech Netlab team. It was achieved through
the use of seven 10 Gbps links to Cisco 7600 and 6500 series switch routers
provided by Cisco Systems at the Caltech Center for Advanced Computing
(CACR) booth, and three 10 Gbps links to the SLAC/Fermilab booth. The
external network connections included four dedicated wavelengths of National
LambdaRail, between the SC2004 show floor in Pittsburgh and Los Angeles (two
waves), Chicago, and Jacksonville, as well as three 10 Gbps connections across
the SCinet network infrastructure at SC2004 with Qwest-provided wavelengths to
the Internet2 Abilene Network (two 10 Gbps links), the TeraGrid (three 10 Gbps
links) and ESnet. 10 Gigabit Ethernet (10 GbE) interfaces provided by S2io were
used on servers running FAST at the Caltech/CACR booth, and interfaces from
Chelsio equipped with transport offload engines (TOE) running standard TCP
were used at the SLAC/FNAL booth. During the test, the network links over both
the Abilene and National LambdaRail networks were shown to operate
successfully at up to 99 percent of full capacity.
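FAST TCP sustains high utilization on long, high-capacity paths by adjusting its congestion window from measured queueing delay rather than from packet loss. The sketch below follows the published form of the periodic window update (Jin, Wei and Low, 2004); the parameter values are illustrative, not those tuned for this demonstration:

```python
# Simplified sketch of the FAST TCP congestion-window update.
# alpha and gamma values here are illustrative only.

def fast_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    """One periodic update of the congestion window w (in packets).

    base_rtt : smallest round-trip time observed (propagation delay)
    rtt      : current measured round-trip time
    alpha    : target number of this flow's packets queued in the network
    """
    target = (base_rtt / rtt) * w + alpha
    # Move a fraction gamma of the way toward the target, never more
    # than doubling the window in one step.
    return min(2 * w, (1 - gamma) * w + gamma * target)

# On an uncongested path (rtt == base_rtt) the window grows by
# gamma * alpha per update; as queueing delay builds, rtt rises and the
# window converges to the point where about alpha packets are queued.
w = 1000.0
for _ in range(5):
    w = fast_window_update(w, base_rtt=0.180, rtt=0.185)
```

Because the update is driven by delay, the window stabilizes near equilibrium instead of oscillating through the loss-driven sawtooth of standard TCP, which is what makes it effective at filling 10 Gbps transcontinental paths.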
The Bandwidth Challenge allowed the scientists and engineers involved to
preview the globally distributed Grid system that is now being developed in the
US and Europe in preparation for the next generation of high energy physics
experiments at CERN’s Large Hadron Collider (LHC), scheduled to begin
operation in 2007. Physicists at the LHC will search for the Higgs particles
thought to be responsible for mass in the universe, supersymmetry, and other
fundamentally new phenomena bearing on the nature of matter and spacetime,
in an energy range made accessible by the LHC for the first time.
The largest physics collaborations at the LHC, CMS and ATLAS, each
encompass more than 2000 physicists and engineers from 160 universities and
laboratories spread around the globe. In order to fully exploit the potential for
scientific discoveries, many Petabytes of data will have to be processed,
distributed and analyzed. The key to discovery is the analysis phase, where
individual physicists and small groups repeatedly access, and sometimes extract
and transport Terabyte-scale data samples on demand, in order to optimally
select the rare "signals" of new physics from potentially
overwhelming "backgrounds" from already-understood particle interactions. This
data will be drawn from major facilities at CERN in Switzerland, at Fermilab and
the Brookhaven lab in the U.S., and at other laboratories and computing centers
around the world, where the accumulated stored data will amount to many tens
of Petabytes in the early years of LHC operation, rising to the Exabyte range
within the coming decade.
Future optical networks incorporating multiple 10 Gbps links are the foundation
of the Grid system that will drive these scientific discoveries. A "hybrid" network
integrating both traditional packet switching and routing, and dynamically
constructed optical paths to support the largest data flows, is a central part of the
near-term future vision that the scientific community has adopted to meet the
challenges of data intensive science in many fields. By demonstrating that many
10 Gbps wavelengths can be used efficiently over continental and transoceanic
distances (often in both directions simultaneously), the high energy physics team
showed that this vision of a worldwide dynamic Grid supporting many Terabyte
and larger data transactions is practical.
While the SC2004 100+ Gbps demonstration required a major effort by the teams
involved and their sponsors, in partnership with major research and education
network organizations in the U.S., Europe, Latin America and Asia Pacific, it is
expected that networking on this scale in support of the largest science projects
(such as the LHC), will be commonplace within the next three to five years.
The network has been deployed through exceptional support by Cisco Systems,
Hewlett Packard, Newisys, S2io, Chelsio, Sun Microsystems and Boston Ltd., as
well as the staffs of National LambdaRail, Qwest, the Internet2 Abilene Network,
CENIC, ESnet, TeraGrid, AMPATH, RNP and the GIGA project, as well as
ANSP/FAPESP in Brazil, KAIST in Korea, UKERNA in the UK, and the Starlight
international peering point in Chicago. The international connections included
the "LHCNet" OC-192 link between Chicago and CERN at Geneva,
the "CHEPREO" OC-48 link between Abilene (Atlanta), FIU (Miami) and São
Paulo, as well as an OC-12 link between Rio de Janeiro, Madrid, GÉANT, and
Abilene (New York). The "APII-TransPAC" links to Korea also were used with
good occupancy. The throughputs to and from Latin America and Korea
represented a significant step up in scale, which the team members hope will be
the beginning of a trend towards the widespread use of 10 Gbps-scale network
links on DWDM optical networks interlinking different world regions in support of
science, by the time the LHC begins operation in 2007. The demonstration and
the developments leading up to it, were made possible through the strong
support of the U.S. Department of Energy and the National Science Foundation,
in cooperation with the agencies of the international partners.
As part of the demonstration, a distributed analysis of simulated LHC physics
data was done using the Grid-enabled Analysis Environment (GAE) developed at
Caltech for the LHC and many other major particle physics experiments, as part
of the Particle Physics Data Grid (PPDG), GriPhyN/iVDGL and Open Science
Grid projects. This involved the transfer of data to CERN, Florida, Fermilab,
Caltech, UC San Diego, and Brazil for processing by clusters of computers, and
finally aggregating the results back to the show floor to create a dynamic visual
display of quantities of interest to the physicists. In another part of the
demonstration, file servers at the SLAC/FNAL booth, in London and Manchester
also were used for disk to disk transfers from Pittsburgh to the UK. This gave
physicists valuable experience in the use of the large distributed datasets and
computational resources connected by fast networks, on the scale required at the
start of the LHC physics program.
The team used the MonALISA (MONitoring Agents using a Large Integrated
Services Architecture) system developed at Caltech to monitor and display the
real-time data for all the network links used in the demonstration, as illustrated in
the figure. MonALISA is a highly scalable set of
autonomous self-describing agent-based subsystems which are able to
collaborate and cooperate in performing a wide range of monitoring tasks for
networks and Grid systems, as well as the scientific applications themselves.
Detailed results for the network traffic on all the links used are available
at: http://boson.cacr.caltech.edu:8888/
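MonALISA itself is a Java and Jini based agent framework, and none of the names in the following sketch come from its API; the fragment only illustrates the core task such a monitoring agent performs for each link, namely periodically sampling a cumulative byte counter and deriving a throughput time series for display:

```python
# Illustrative sketch only: class and method names here are invented,
# not MonALISA API. It shows the kind of per-link task a monitoring
# agent performs: poll a cumulative byte counter, report throughput.

import time

class LinkMonitor:
    """Polls a cumulative byte counter and reports throughput in Gbps."""

    def __init__(self, link_name, read_counter):
        self.link_name = link_name          # e.g. "NLR wave, Pittsburgh-LA"
        self.read_counter = read_counter    # callable returning total bytes sent
        self.last_bytes = read_counter()
        self.last_time = time.monotonic()

    def sample(self):
        """Return the average throughput since the previous sample."""
        now_bytes = self.read_counter()
        now_time = time.monotonic()
        gbps = (now_bytes - self.last_bytes) * 8 / (now_time - self.last_time) / 1e9
        self.last_bytes, self.last_time = now_bytes, now_time
        return {"link": self.link_name, "gbps": gbps}
```

In the real system many such agents run autonomously at the monitored sites and publish their measurements to higher-level services, which aggregate them into displays like the one shown on the SC2004 floor.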
The team hopes this new demonstration will encourage scientists and engineers
in many sectors of society to develop and plan to deploy a new generation of
revolutionary Internet applications. Multi-gigabit/s end-to-end network
performance will lead to new models for how research and business is
performed. Scientists will be empowered to form "virtual organizations" on a
planetary scale, sharing in a flexible way their collective computing and data
resources. In particular, this is vital for projects on the frontiers of science and
engineering, in "data intensive" fields such as particle physics, astronomy,
bioinformatics, global climate modeling, geosciences, fusion, and neutron
science.
Harvey Newman, Professor of Physics at Caltech and head of the team said,
"This is a breakthrough for the development of global networks and Grids, as well
as inter-regional cooperation in science projects at the high energy frontier. We
demonstrated that multiple links of various bandwidths, up to the 10 Gbps range
can be used effectively over long distances. This is a common theme that will
drive many fields of data intensive science, where the network needs are
foreseen to rise from tens of Gbps to the Terabit/sec range within the next 5-10
years. In a broader sense, this demonstration paves the way for more flexible,
efficient sharing of data and collaborative work by scientists in many countries,
which could be a key factor enabling the next round of physics discoveries at the
high energy frontier. There are also profound implications for how we could
integrate information sharing and on-demand audiovisual collaboration in our
daily lives, with a scale and quality previously unimaginable."
Les Cottrell, assistant director of SLAC's computer services, said: "The smooth
interworking of 10GE interfaces from multiple vendors, the ability to successfully
fill 10Gbits/s paths both on local area networks (LANs), cross country and inter-
continentally, the ability to transmit greater than 10Gbits/s from a single host, and
the ability of TCP Offload Engines (TOE) to reduce CPU utilization, all illustrate
the emerging maturity of the 10Gigabit/second Ethernet market. The current
limitations are not in the network but rather in the servers at the ends of the links,
and their buses."
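Cottrell's point about server buses can be made concrete with nominal 2004-era bus figures (generic specifications, not measurements from the demonstration): a 64-bit, 133 MHz PCI-X slot, typical for 10 GbE adapters at the time, peaks just below 10 GbE line rate even before protocol overhead.

```python
# Nominal peak bandwidth of the host buses carrying 10 GbE traffic.
# Figures are generic bus specifications, not demonstration data.

def bus_peak_gbps(width_bits, clock_mhz):
    """Theoretical peak of a parallel bus in gigabits per second."""
    return width_bits * clock_mhz * 1e6 / 1e9

pci_x = bus_peak_gbps(64, 133)   # ~8.5 Gbps: cannot fill a 10 GbE link
pcie_x8 = 8 * 2.0                # PCIe 1.x: 8 lanes x 2.0 Gbps effective

print(f"PCI-X 133: {pci_x:.1f} Gbps, PCIe x8: {pcie_x8:.1f} Gbps")
```

This is why the quote singles out hosts and buses rather than the network: a single PCI-X server could not even theoretically saturate one 10 Gbps wavelength.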
Further information about the demonstration may be found at:
http://ultralight.caltech.edu/sc2004 and
http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2004/hiperf.html
About Caltech
With an outstanding faculty, including four Nobel laureates, and
such off-campus facilities as the Jet Propulsion Laboratory, Palomar
Observatory, and the W. M. Keck Observatory, the California Institute of
Technology is one of the world's major research centers. The Institute also
conducts instruction in science and engineering for a student body of
approximately 900 undergraduates and 1,000 graduate students who maintain a
high level of scholarship and intellectual achievement. Caltech's 124-acre
campus is situated in Pasadena, California, a city of 135,000 at the foot of the
San Gabriel Mountains, approximately 30 miles inland from the Pacific Ocean
and 10 miles northeast of the Los Angeles Civic Center. Caltech is an
independent, privately supported university, and is not affiliated with either the
University of California system or the California State Polytechnic universities.
About SLAC
The Stanford Linear Accelerator Center (SLAC) is one of the world's leading
research laboratories. Its mission is to design, construct, and operate
state-of-the-art electron accelerators and related experimental facilities for use in
high-energy physics and synchrotron radiation research. In the course of doing
so, it has established the largest known database in the world, which grows at 1
terabyte per day. That, and its central role in the world of high-energy physics
collaboration, places SLAC at the forefront of the international drive to optimize
the worldwide, high-speed transfer of bulk data.
About CACR
Caltech's Center for Advanced Computing Research (CACR)
performs research and development on leading edge networking and computing
systems, and methods for computational science and engineering. Some current
efforts at CACR include the National Virtual Observatory, TeraGrid, Particle
Physics Data Grid, GriPhyN, iVDGL, Computational Infrastructure for
Geophysics, Geoframework, and Cascade.
About Netlab
Netlab is the Networking Laboratory at Caltech led by Professor
Steven Low, where FAST TCP has been developed. The group does research in
the control and optimization of protocols and networks, and designs, analyzes,
implements, and experiments with new algorithms and systems.
About Fermilab
Fermi National Accelerator Laboratory (Fermilab) is a US
Department of Energy national laboratory located in Batavia, Illinois, outside
Chicago. Fermilab's mission is to advance the understanding of the fundamental
nature of matter and energy through basic scientific research conducted at the
frontiers of high energy physics and related disciplines. The Laboratory hosts the
world's highest energy particle accelerator, the Tevatron. Fermilab-supported
experiments generate petabyte-scale data per year, and involve large,
international collaborations with requirements for high volume data movement to
their home institutions. The Laboratory actively works to remain on the leading
edge of advanced wide area network technology in support of its collaborations.
About CERN
CERN, the European Organization for Nuclear Research, has its
headquarters in Geneva. At present, its member states are Austria, Belgium,
Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece,
Hungary, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain,
Sweden, Switzerland, and the United Kingdom. Israel, Japan, the Russian
Federation, the United States of America, Turkey, the European Commission,
and UNESCO have observer status.
About StarLight
StarLight is an advanced optical infrastructure and proving
ground for network services optimized for high-performance applications.
Operational since summer 2001, StarLight is a 1 GE and 10 GE switch/router
facility for high-performance access to participating networks and also offers true
optical switching for wavelengths. StarLight is being developed by the Electronic
Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC), the
International Center for Advanced Internet Research (iCAIR) at Northwestern
University, and the Mathematics and Computer Science Division at Argonne
National Laboratory, in partnership with Canada's CANARIE and the
Netherlands' SURFnet. STAR TAP and StarLight are made possible by major
funding from the U.S. National Science Foundation to UIC. StarLight is a service
mark of the Board of Trustees of the University of Illinois.
About the University of Manchester
The University of Manchester has been
created by combining the strengths of UMIST (founded in 1824) and the Victoria
University of Manchester (founded in 1851) to form the largest single-site
university in the UK with 34,000 students. On Friday 22nd October 2004 it
received its Royal Charter from Her Majesty the Queen, with an unprecedented
£300m capital investment programme. With a continuing proud tradition of
innovation and excellence, twenty-three Nobel Prize winners have studied at
Manchester. Rutherford conducted the research which led to the splitting of the
atom there, and the world's first stored-program electronic digital computer
successfully executed its first program there in June 1948. The Schools of
Physics, Computational Science, Computer Science and the Network Group
together with the E-Science North West Centre research facility are very active in
developing a wide range of e-science projects and Grid technologies.
About UERJ (Rio de Janeiro)
Founded in 1950, the Rio de Janeiro State
University (UERJ: www.uerj.br) ranks among the ten largest universities in
Brazil, with more than 23,000 students. UERJ’s five campuses are home to 22
libraries, 412 classrooms, 50 lecture halls and auditoriums, and 205 laboratories.
UERJ is responsible for important public welfare and health projects through its
centers of medical excellence, the Pedro Ernesto University Hospital (HUPE) and
the Piquet Carneiro Day-care Policlinic Centre, and it is committed to the
preservation of the environment. The UERJ High Energy Physics group includes
15 faculty, postdoctoral and visiting Ph.D. physicists and 12 Ph.D. and Masters
students, working on experiments at Fermilab (D0) and CERN (CMS). The group
has constructed a Tier2 center to enable it to take part in the Grid-based data
analysis planned for the LHC, and has originated the concept of a Brazilian “HEP
Grid”, working in cooperation with USP and several other universities in Rio and
São Paulo.
About UNESP (São Paulo)
Created in 1976 with the administrative union of
several isolated Institutes of Higher Education in the State of São Paulo, the São
Paulo State University, UNESP, has campuses in 24 different cities in the State
of São Paulo. The university has 25,000 undergraduate students and almost
10,000 graduate students. Since 1999 the university has had a group participating
in the DZero Collaboration at Fermilab; this group operates the São Paulo
Regional Analysis Center (SPRACE).
About USP (São Paulo)
The University of São Paulo, USP, is the largest
institution of higher education and research in Brazil, and the third largest in
Latin America. Most of its 35 units are located on its campus in the state capital.
It has around 40,000 undergraduate students and around
25,000 graduate students. It is responsible for almost 25% of all Brazilian papers
and publications indexed on the Institute for Scientific Information (ISI). The
SPRACE cluster is located at the Physics Institute.
About Kyungpook National University (Daegu)
Kyungpook National University is one of the leading universities in Korea, especially in physics and
information science. The university has 13 colleges and 9 graduate schools with
24,000 students. It houses the Center for High Energy Physics (CHEP), in which
most Korean high-energy physicists participate. CHEP (chep.knu.ac.kr) was
approved as one of the designated Excellent Research Centers supported by the
Korean Ministry of Science.
About the Particle Physics Data Grid (PPDG)
The Particle Physics Data Grid is developing and deploying production Grid systems
vertically integrating experiment-specific applications, Grid technologies, Grid
and facility computation and storage resources to form effective end-to-end
capabilities. PPDG is a collaboration of computer scientists with a strong record
in Grid technology, and physicists with leading roles in the software and network
infrastructures for major high-energy and nuclear experiments. PPDG’s goals
and plans are guided by the immediate and medium-term needs of the physics
experiments and by the research and development agenda of the computer
science groups.
About GriPhyN and iVDGL
GriPhyN and iVDGL are developing and deploying Grid infrastructure for several
frontier experiments in physics and astronomy. These experiments together will
utilize Petaflops of CPU power and generate hundreds of Petabytes of data that
must be archived, processed, and analyzed by thousands of researchers at
laboratories, universities and small colleges and institutes spread around the
world. The scale and complexity of this "Petascale" science drive GriPhyN's
research program to develop Grid-based architectures, using "virtual data" as a
unifying concept. iVDGL is deploying a Grid laboratory where these technologies
can be tested at large scale and where advanced technologies can be
implemented for extended studies by a variety of disciplines.
About CHEPREO
Florida International University (FIU), in collaboration with
partners at Florida State University, the University of Florida, and the California
Institute of Technology, has been awarded an NSF grant to create and operate
an interregional Grid-enabled Center for High-Energy Physics Research and
Educational Outreach (CHEPREO) at FIU. CHEPREO
encompasses an integrated program of collaborative physics research on CMS,
network infrastructure development, and educational outreach at one of the
largest minority universities in the US. The center is funded by four NSF
directorates including Mathematical and Physical Sciences, Scientific Computing
Infrastructure, Elementary, Secondary and Informal Education, and International
Programs.
About Open Science Grid
The Open Science Grid aims to build and operate a persistent, coherent national
grid infrastructure for large scale U.S. science, by federating many of the grid
resources currently in use at DOE and NSF-sponsored U.S. labs and
universities. The plan is to iteratively extend and adapt existing grids, such as
Grid2003, to enable the use of common grid infrastructure and shared resources
for the benefit of scientific applications. The Open Science Grid Consortium
includes scientific collaborations, scientific computing centers and existing and
new grid research and deployment projects, involving both computational and
application scientists, working together to provide and support the set of facilities,
services and infrastructure needed.
About Internet2®
Led by more than 200 U.S. universities working with industry
and government, Internet2 develops and deploys advanced network applications
and technologies for research and higher education, accelerating the creation of
tomorrow’s Internet. Internet2 recreates the partnerships among academia,
industry, and government that helped foster today’s Internet in its infancy.
About the Abilene Network
Abilene, developed in partnership with Qwest
Communications, Juniper Networks, Nortel Networks and Indiana University,
provides nationwide high-performance networking capabilities for more than 225
universities and research facilities in all 50 states, the District of Columbia, and
Puerto Rico.
About The TeraGrid
The TeraGrid, funded by the National Science Foundation,
is a multi-year effort to build a distributed national cyberinfrastructure. TeraGrid
entered full production mode in October 2004, providing a coordinated set of
services for the nation's science and engineering community. TeraGrid's unified
user support infrastructure and software environment allow users to access
storage and information resources as well as over a dozen major computing
systems at nine partner sites via a single allocation, either as stand-alone
resources or as components of a distributed application using Grid software
capabilities. Over 40 teraflops of computing power, 1.5 petabytes of online
storage, and multiple visualization, data collection, and instrument resources are
integrated at the nine TeraGrid partner sites. Coordinated by the University of
Chicago and Argonne National Laboratory, the TeraGrid partners include the
National Center for Supercomputing Applications (NCSA) at the University of
Illinois at Urbana-Champaign (UIUC), San Diego Supercomputer Center (SDSC)
at the University of California San Diego (UCSD), the Center for Advanced
Computing Research (CACR) at the California Institute of Technology (Caltech),
the Pittsburgh Supercomputing Center (PSC), Oak Ridge National Laboratory,
Indiana University, Purdue University, and the Texas Advanced Computing
Center (TACC) at the University of Texas-Austin.
About National LambdaRail
National LambdaRail (NLR) is a major initiative of
U.S. research universities and private sector technology companies to provide a
national scale infrastructure for research and experimentation in networking
technologies and applications. NLR puts the control, the power, and the promise
of experimental network infrastructure in the hands of the nation's scientists and
researchers.
About CENIC
CENIC (www.cenic.org) is a not-for-profit corporation serving
California Institute of Technology, California State University, Stanford University,
University of California, University of Southern California, California Community
Colleges, and the statewide K-12 school system. CENIC's mission is to facilitate
and coordinate the development, deployment, and operation of a set of robust
multi-tiered advanced network services for this research and education
community.
About ESnet
The Energy Sciences Network (ESnet) is a high-speed
network serving thousands of Department of Energy scientists and
collaborators worldwide. A pioneer in providing high-bandwidth, reliable
connections, ESnet enables researchers at national laboratories, universities and
other institutions to communicate with each other using the collaborative
capabilities needed to address some of the world's most important scientific
challenges. Managed and operated by the ESnet staff at Lawrence Berkeley
National Laboratory, ESnet provides direct high-bandwidth connections to all
major DOE sites, multiple cross connections with Internet2/Abilene, connections
to Europe via GEANT, to Japan via SuperSINET, as well as fast interconnections
to more than 100 other networks. Funded principally by DOE’s Office of Science,
ESnet services allow scientists to make effective use of unique DOE research
facilities and computing resources, independent of time and geographic location.
About Qwest
Qwest Communications International Inc. (NYSE: Q) is a leading
provider of voice, video and data services. With more than 40,000 employees,
Qwest is committed to the “Spirit of Service” and providing world-class services
that exceed customers’ expectations for quality, value and reliability.
About UKLight
The UKLight facility (www.uklight.ac.uk) was set up in 2003 with
a grant of £6.5M from HEFCE (the Higher Education Funding Council for
England) to provide an international experimental testbed for optical networking
and support projects working on developments towards optical networks and the
applications that will use them. UKLight will bring together leading-edge
applications, Internet engineering for the future, and optical communications
engineering, and enable UK researchers to join the growing international
consortium which currently spans Europe and North America. A "Point of
Access" (PoA) in London provides international connectivity with 10 Gbit network
connections to peer facilities in Chicago (StarLight) and Amsterdam
(NetherLight). UK research groups gain access to the facility via extensions to
the 10 Gbit SuperJANET development network, and a national dark fibre facility
is under development for use by the photonics research community.
Management of the UKLight facility is being undertaken by UKERNA on behalf of
the Joint Information Systems Committee (JISC).
About AMPATH
Florida International University’s Center for Internet
Augmented Research and Assessment (CIARA) has developed an international,
high-performance research connection point in Miami, Florida, called AMPATH
(AMericasPATH: www.ampath.fiu.edu). AMPATH’s goal is to enable wide-
bandwidth digital communications between US and international research and
education networks, as well as a variety of US research programs in the region.
AMPATH in Miami acts as a major international exchange point (IXP) for the
research and education networks in South America, Central America, Mexico
and the Caribbean and offers connectivity to the Abilene Internet2 network and
StarLight. Since June 2001, the AMPATH project has connected four National
Research and Education Networks in South America: REUNA of Chile, RNP of
Brazil, CNTI of Venezuela and RETINA of Argentina; the Academic Network of
São Paulo, ANSP, which is a State-funded network; the University of Puerto
Rico; the Arecibo observatory; and the Gemini-South telescope.
About RedCLARA
RedCLARA is an advanced network designed to
interconnect national research and education networks in Latin America and the
Caribbean (LA&C) region, and to provide access to the global research
community through inter-regional connections. RedCLARA began operations on
September 1, 2004, based on a 155 Mbps backbone ring linking the national
research networks of Argentina, Brazil, Chile, Mexico and Panama, and
connecting them to GÉANT at 622 Mbps via a link between São Paulo, Brazil
and Madrid, Spain. These circuits are being leased from Global Crossing, and
the RedCLARA points of presence are equipped with routers generously donated
by Cisco. By 2005, the national networks of a further 13 countries are expected
to be connected to RedCLARA. RedCLARA is the major product of the ALICE
project, coordinated by DANTE, with 80% of the cost provided by the European
Commission and 20% by the LA&C partners, most of which are associates of the
Latin American Cooperation for Advanced Networking (CLARA), a not-for-profit
association registered in Montevideo, Uruguay.
About RNP
RNP, the National Education and Research Network of Brazil, is a
not-for-profit company which promotes the innovative use of advanced
networking, with the joint support of the Ministry of Science and Technology and
the Ministry of Education. Historically, RNP was responsible for the introduction
and adoption of Internet technology in Brazil. Today, RNP operates a nationally
deployed high-performance network used for collaboration and communication in
research and education throughout the country, reaching all 26 states and the
Federal District, and provides both commodity and advanced research Internet
connectivity to more than 200 universities, research centers and technical
schools.
About CPqD
CPqD is the largest center for telecommunications research and
development in Latin America, applying almost 30% of its annual turnover in
R&D activities, and is one of the ten leading software producers in Brazil.
CPqD has special competence in the areas of operations and
business support systems, highly specialized consulting services, and laboratory
services. CPqD solutions of high technological complexity are in widespread use
in Brazil, as well as in the USA, Europe and Latin America.
About Project GIGA
Project GIGA is a large-scale networking testbed project
under development in Brazil, concerned with optical networking
technologies and with the applications and telecommunications services
associated with a high capacity IP network. The testbed network was
inaugurated in May 2004, extends over 735 km, and interconnects 17
universities and research centers
located in seven cities in the states of Rio de Janeiro and São Paulo. Network
capacity is currently based on multiple 2.5 Gbps wavelengths, with expected
future upgrade to 10 Gbps. The project is jointly coordinated by RNP and CPqD,
and is financed by the Brazilian government agency, Financiadora de Estudos e
Projetos (Finep), with resources from the Fund for the Development of
Telecommunications Technology (Funttel).
About KISTI
KISTI (the Korea Institute of Science and Technology Information),
charged with playing the pivotal role in establishing the national science
and technology knowledge information infrastructure, was founded through
the merger of the Korea Institute of Industry and Technology Information (KINITI)
and the Korea Research and Development Information Center (KORDIC) in
January 2001. KISTI is under the supervision of the Office of the Prime Minister
and will play a leading role in building the nationwide infrastructure for knowledge
and information by linking the high-performance research network with its
supercomputers.
About APII-TransPAC
Based on the agreement made by the Ministers at the
1st APEC TELMIN meeting held in Seoul, Korea in 1994 to build an advanced
information infrastructure in the Asia Pacific region, the APII Testbed project was
jointly proposed by Korea and Japan at the 12th APEC TEL meeting in 1995.
The APII Testbed project aims to provide information infrastructure in the
Asia Pacific region that will build a basis for bridging the digital divide, and to
conduct joint R&D efforts on application services. TransPAC
is the US-Asia-Pacific Network Consortium proposed by Indiana University to
provide high performance international Internet service connecting the Asia
Pacific Advanced Network to other global networks for the purpose of
international collaborations in research and education.
About Newisys
Newisys®, Inc., a creative technology company significantly
impacting the server-computing environment, is dedicated to designing and
delivering enterprise-class server & storage products. Newisys offers a family of
robust designs targeted for integration into OEM product offerings.
About Sun Microsystems
Since its inception in 1982, a singular vision, "The
Network Is The Computer(TM)", has propelled Sun Microsystems, Inc. (Nasdaq:
SUNW) to its position as a leading provider of industrial-strength hardware,
software and services that make the Net work. Sun can be found in more than
100 countries.
About Boston Limited
With over 12 years of experience, Boston Limited
is a UK-based specialist in high end workstation, server and
storage hardware. Boston's solutions bring the latest innovations to market, such
as PCI-Express, DDR II and Infiniband technologies. As the pan-European
distributor for Supermicro, Boston Limited works very closely with key
manufacturing partners as well as strategic clients within the academic and
commercial sectors, to provide cost-effective solutions with exceptional
performance.
About S2io, Inc
Founded in 2001, S2io Inc. has locations in Cupertino,
California and Ottawa, Canada. S2io delivers 10 Gigabit Ethernet hardware &
software solutions that enable OEMs to solve their customers’ high-end
networking problems. The company’s line of products, Xframe®, is based on
S2io-developed technology and includes full IPv6 support and
comprehensive stateless offloads for TCP/IP performance without "breaking the
stack". S2io has raised over $42M in funding with its latest C round taking place
in June 2004.
About Chelsio Communications
Chelsio Communications is leading the
convergence of networking, storage and clustering interconnects with its robust,
high-performance and proven protocol acceleration technology. Featuring a
highly scalable and programmable architecture, Chelsio is shipping 10-Gigabit
Ethernet adapter cards with protocol offload, delivering the low latency and
superior throughput required for high-performance computing applications.
Source: California Institute of Technology
http://ultralight.caltech.edu/sc2004/BandwidthRecord/
Contact:
Robert Tindol
California Institute of Technology
tindol@caltech.edu
Harvey B. Newman
California Institute of Technology
newman@hep.caltech.edu