February 11, 2000
JUELICH, GERMANY -- ZAM (Central Institute for Applied Mathematics) at the Research Center Juelich, the project leader, and GMD (National Research Center for Information Technology) concluded the DFN (German Research Network) Gigabit Testbed West with a colloquium in Juelich from January 31 to February 1. The participating projects presented their results there over a high-speed WAN connection. DFN also outlined its plans for a Gigabit research network to be installed in Germany this year.
DFN currently runs the 155 Mbit/s B-WiN (broadband research network) throughout Germany. To test Gbit/s transmission rates and all the necessary equipment, two testbeds have been installed. Testbed West connects the Research Center Juelich, GMD in Sankt Augustin near Bonn, the Universities of Bonn and Cologne, and the DLR (German Aerospace Center) near Cologne. ZAM held the project leadership, and the testbed was funded with about 9 million DM (4.5 million US$) over a timeframe of two and a half years. The Gigabit Testbed South connects Munich, Garching, Erlangen and Berlin.
The main goals of the testbed included testing the new communication technologies, serving as a prototype for the production Gbit network, testing the bandwidth of the connected computers, and finding real-life applications.
Project manager Dr. Thomas Eickermann, ZAM, reported on the phases, experiences and results. The testbed had two phases. Phase one, from August 1997 to January 2000, consisted of installing the network, developing tools and providing services for the applications that used the testbed. From 1999 to June 2000, the network was extended to Bonn and Cologne. In August 1997 the first 622 Mbit/s SDH/ATM (Synchronous Digital Hierarchy/Asynchronous Transfer Mode) connection between Juelich and GMD, 120 km (about 75 miles) of fibre, went into operation. One year later the speed was increased to 2.4 Gbit/s. At each end, a FORE Systems ASX-4000 ATM switch distributes the data onto the local ATM and HiPPI networks of the centers.
In Juelich two Cray T3Es and a T90 were connected, and at Sankt Augustin an IBM SP2, a Sun E5000 and an SGI Onyx2 visualisation server. After initial attenuation problems on the fibre, the connection was very stable. At the end of April 1999 the connection from GMD to the Academy of Media Arts Cologne, DLR and the University of Cologne was established. By the end of 1999 the University of Bonn joined the testbed. All the institutions had a bandwidth of 622 Mbit/s. Initially, interoperability problems between hardware and software components of different vendors appeared; they could only be solved recently, by changing the ATM signaling to hierarchical PNNI. Throughput measurements showed that the network is no longer the bottleneck; the attached end systems are. A fast interface does not ensure fast processing: the bottleneck of workstations is the attachment of the storage media, and that of the supercomputers is their I/O rate, stated Peter Wunderling, GMD.
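To put the mismatch into perspective, a back-of-the-envelope comparison of the line rates against a typical end-system disk rate is instructive. The short Python sketch below uses the link speeds quoted above; the sustained disk rate of roughly 30 MByte/s is an illustrative assumption, not a figure from the testbed measurements.

    # Rough comparison of the article's link rates against an assumed
    # end-system disk rate; the disk figure is illustrative only.

    def mbit_to_mbyte(mbit_per_s: float) -> float:
        """Convert a line rate in Mbit/s to MByte/s (8 bits per byte)."""
        return mbit_per_s / 8.0

    links_mbit = {
        "site access (622 Mbit/s)": 622.0,
        "Juelich-GMD backbone (2.4 Gbit/s)": 2400.0,
    }
    assumed_disk_mbyte = 30.0  # assumed sustained workstation disk rate

    for name, rate in links_mbit.items():
        wire = mbit_to_mbyte(rate)
        print(f"{name}: {wire:.0f} MByte/s on the wire, "
              f"about {wire / assumed_disk_mbyte:.0f}x the assumed disk rate")

Even under this generous assumption, the wire can carry several times more data than a single workstation disk can deliver, which is consistent with the observation that the end systems, not the network, limit throughput.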
The team encountered problems connecting the Cray and IBM supercomputers. The vendors had held out the prospect of ATM interfaces with 622 Mbit/s and 2.5 Gbit/s, but these cards have not been built. Moreover, the I/O performance of these computers is not sufficient for such high bandwidths. As an alternative, ZAM and GMD, in cooperation with SGI/Cray, implemented a solution based on HiPPI (High Performance Parallel Interface). As HiPPI cannot be used in WANs, the centers used PCI-bus based workstations from Sun and SGI as HiPPI-ATM gateways.
The experience gained in real applications underpinned the necessity of fast networks for metacomputing and distributed applications. I will report on some of the projects in a second article.
Based on the experience of the two testbeds, DFN starts to build the G-WiN (Gigabit research network) in April of this year. The German Ministry of Research (BMBF) will support the activity with start-up funding of about 80 million DM (40 million US$); after that, as with B-WiN, the partners and members of DFN have to pay for the network usage. In a Europe-wide call for tenders, Deutsche Telekom Systemloesungen GmbH (DeTeSystem), Nuremberg, a Telekom subsidiary, won about 80 percent of the contract for the core network. The rest goes to other local or regional carriers that provide access to the core. Cisco offered the routers for 10 core network nodes free of charge.
The core network consists of 29 nodes on two levels: 10 Level 1 POPs plus 19 regional nodes. The G-WiN rollout starts in April and will be finished in autumn or by the end of this year. In the first step, a bandwidth of 622 Mbit/s is planned, to be extended to 2.5 Gbit/s for the backbone between Level 1 POPs. The access capacity for customers at the centers will be extended from 622 Mbit/s to 2.5 Gbit/s in 2001 and to 10 Gbit/s in 2003. The backbone capacity between Level 1 POPs is planned to grow to a multiple of 2.5 Gbit/s in 2001, up to 10 Gbit/s in 2002, and a multiple of 10 Gbit/s in 2003. Each Level 2 POP is assigned to a fixed Level 1 POP; for example, Hannover is a Level 1 POP, with Goettingen and Brunswick attached as Level 2 POPs.
The members of DFN decided that the usage costs will be independent of distance. For special services like point-to-point connections or ATM, users have to pay an extra fee. DFN plans a cost structure for best-effort IP: it starts at about 50,000 DM (25,000 US$) per year for a 2 Mbit/s line with 40 GByte/month and ends at about 1.5 million DM (0.75 million US$) per year for a 622 Mbit/s line with 25 TByte/month. These costs can be reduced, depending on experience, usage and the number of participants. The financing is assured by options, contracts and the orders placed by the institutions.
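As a rough illustration of how this tariff scales, the implied price per transferred GByte can be computed from the figures above, assuming the included monthly volume is fully used; the sketch below only restates the article's numbers and is not an official DFN price calculation.

    # Implied cost per GByte for the quoted best-effort IP tariffs,
    # assuming the included monthly volume is fully used.
    tariffs = [
        # (line rate, annual cost in DM, included volume in GByte per month)
        ("2 Mbit/s", 50_000, 40),
        ("622 Mbit/s", 1_500_000, 25_000),  # 25 TByte/month
    ]

    for line, dm_per_year, gbyte_per_month in tariffs:
        gbyte_per_year = gbyte_per_month * 12
        print(f"{line}: about {dm_per_year / gbyte_per_year:.0f} DM per GByte "
              f"({gbyte_per_year} GByte included per year)")

Under these assumptions the entry-level tariff works out to roughly 100 DM per GByte, while the top tariff comes to about 5 DM per GByte, so the larger connections are far cheaper per unit of transferred data.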
With the new G-WiN, Germany plays a leading role in high-bandwidth networks in Europe and is comparable to the USA, as DFN officials noted. In addition, DFN will improve the capacity to America.
Uwe Harms is a supercomputing consultant and owner of
Harms-Supercomputing-Consulting in Munich, Germany.