Larry Smarr on Future of Grid, Cyberinfrastructure
May 23, 2005

Editorial note: In this Q&A, Larry Smarr, director of the California Institute for Telecommunications and Information Technology (Calit2) and NCSA founding director, discusses, among other things, the effects LambdaGrids will have on Grid computing, the timeline for a legitimate cyberinfrastructure in the United States and what he calls the “Third Era” for campus infrastructure.

GRIDtoday: First, I’d like to ask how everything is going at Calit2.

LARRY SMARR: Our California Institute for Telecommunications and Information Technology (Calit2, see www.calit2.net) is a partnership between UC-San Diego and UC-Irvine. It is an experiment in “institutional innovation” to see if a persistent infrastructure to support and increase cross-disciplinary teams can be set up across the vertical stovepipes of departments, schools, and campuses within the UC system. Calit2 is nearing occupation of our two new buildings, which have been designed to enhance such collaboration. The Irvine building is already being occupied and UCSD’s will be by September. Altogether, we will house more than 1,000 researchers focused on the future of the Internet and how these technological innovations will transform major disciplines such as medicine, transportation, environment and entertainment. The OptIPuter is just one of many dozens of large-scale cross-disciplinary projects Calit2 is driving.

Gt: How is your work with OptIPuter going?

SMARR: The NSF-funded OptIPuter research project (www.optiputer.net) is about halfway through its five-year program. We are currently creating a global prototype of the LambdaGrid that was originally envisioned four years ago. There is a 10 Gbps dedicated optical circuit, purchased by UIC on the NLR footprint, called the CAVEWave, that connects UCSD in San Diego, the University of Washington in Seattle, and the University of Illinois at Chicago and the StarLight facility on Northwestern University’s campus in Chicago. From there, it goes internationally to a growing number of OptIPuter partners. Recently, NASA researchers have asked the OptIPuter to reach out and include the three NASA science centers at JPL, Ames and Goddard. NASA will link them with each other via StarLight at 10 Gbps by early fall. CENIC’s CalREN-XD is enabling a growing Southern California OptIPuter core, linking UCSD to UCI, USC/ISI and SDSU. Each of the Calit2 buildings will have 100 Mpixel tiled displays tied together over 1 Gbps pipes, and later 10 Gbps, both linked to the original 100 Mpixel display developed by Jason Leigh and his colleagues at the Electronic Visualization Lab at UIC.

The OptIPuter middleware team, led by Andrew Chien at UCSD, has demonstrated live performance of our advanced Distributed Virtual Computer (DVC) LambdaGrid middleware, while our first production-quality operating systems have been made available by SDSC’s Phil Papadopoulos through the Rocks distribution system for clusters (www.rocksclusters.org). NCSA’s visualization and data analytics server, a 3TB RAM SGI Altix with a cluster of SGI Prisms, is being added to the OptIPuter to enable remote visualization and analysis of huge datasets. Both of our driving applications in brain imaging (led by Mark Ellisman) and earth sciences (led by John Orcutt) are beginning to show what a LambdaGrid can do to enhance science. Perhaps most exciting is that major NSF shared infrastructure programs are beginning to use OptIPuter concepts, such as the USArray data center at the UCSD Scripps Institution of Oceanography, which is building a 50 Mpixel Apple tiled display (similar to the 10 Mpixel one at Irvine), and the LOOKING ocean observatory system (lookingtosea.ucsd.edu), a prototype for the NSF’s Ocean Research Interactive Observatory Networks (ORION).

Gt: As a part of the Future in Review conference this week, you’ll be participating in a panel discussion, titled “Grid Computing Update: The Next 3-5 Years,” with Wolfgang Gentzsch of MCNC and Wayne Clark of Cisco. What do you foresee for Grid in the next three to five years?

SMARR: The most radical shift will be the transformation of the Grid architecture into a LambdaGrid architecture, by adding the ability to discover, reserve, set up and use dedicated components of the underlying network, as is being done in the OptIPuter project. Traditional Grids run over the best-effort shared Internet, which is quite unpredictable in terms of available bandwidth, latency and jitter, and has low throughput for file transport (NASA measures something like 50 Mbps for earth satellite image retrieval - see, for example, ensight.eos.nasa.gov/active_net_measure.html). The LambdaGrid is a superset of the Grid that conserves the existing middleware, but extends it to include network elements and transport protocols other than TCP.
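
To make the bandwidth gap concrete, here is a back-of-the-envelope sketch in Python. The 50 Mbps and 10 Gbps figures come from the interview; the 1 TB dataset size and the assumption of ideal, uncontended links are purely illustrative.

```python
# Back-of-the-envelope comparison of transfer times (illustrative only).
# The 50 Mbps and 10 Gbps rates come from the interview; the 1 TB dataset
# size and the assumption of full link utilization are hypothetical.

DATASET_BYTES = 1e12            # assume a 1 TB remote-sensing dataset
BEST_EFFORT_BPS = 50e6          # ~50 Mbps observed over the shared Internet
LAMBDA_BPS = 10e9               # 10 Gbps dedicated optical circuit (lambda)

def transfer_hours(size_bytes: float, rate_bps: float) -> float:
    """Idealized transfer time in hours, ignoring protocol overhead."""
    return (size_bytes * 8) / rate_bps / 3600

print(f"Shared Internet:  {transfer_hours(DATASET_BYTES, BEST_EFFORT_BPS):.1f} h")
print(f"Dedicated lambda: {transfer_hours(DATASET_BYTES, LAMBDA_BPS):.2f} h")
# Roughly 44 hours versus about 13 minutes - the gap that motivates
# reserving dedicated lightpaths instead of relying on best-effort TCP paths.
```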

Gt: How do you think Grid has advanced in the last three to five years?

SMARR: We are still in the early days of the transition to routine use of the Grid. Perhaps the most important development has been the adoption of Grid concepts by the major computer vendors - all seem to offer some form of Grid system. The recent announcement that Microsoft has hired Tony Hey, director of the UK’s e-Science Initiative*, is a very strong indicator of how the Grid is catching on in the corporate world. The other important trend has been the integration of the Grid with Web services, enabling Service-Oriented Science, as discussed by Ian Foster in the May 6 issue of Science magazine.
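
A minimal sketch of what the Grid-plus-Web-services pattern can look like from a scientist’s point of view: submitting a job description to a remote analysis service over HTTP and polling for the result. The endpoint URL, payload fields and “jobs” resource below are all invented for illustration; they are not part of Globus or any specific toolkit mentioned in the interview.

```python
# Hypothetical service-oriented science client: submit work to a remote
# analysis service over HTTP and poll for completion. The endpoint, payload
# fields and response format are illustrative only.
import json
import time
import urllib.request

SERVICE = "https://example.org/analysis"   # hypothetical science web service

def submit_job(dataset_url: str, operation: str) -> str:
    """POST a job description and return the identifier the service assigns."""
    body = json.dumps({"dataset": dataset_url, "operation": operation}).encode()
    req = urllib.request.Request(SERVICE + "/jobs", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]

def wait_for_result(job_id: str, poll_seconds: int = 30) -> dict:
    """Poll the job resource until the service reports it as finished."""
    while True:
        with urllib.request.urlopen(f"{SERVICE}/jobs/{job_id}") as resp:
            status = json.load(resp)
        if status["state"] == "done":
            return status["result"]
        time.sleep(poll_seconds)

if __name__ == "__main__":
    job = submit_job("https://example.org/data/brain-volume-042", "segment")
    print(wait_for_result(job))
```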

Gt: One concept getting a lot of attention in the HPC community lately is that of a “cyberinfrastructure” (CI). How are the concepts of CI and Grid computing related? How do they differ?

SMARR: The Grid middleware is an important building block of CI. Essentially, the CI is the networked hardware resources, together with a comprehensive set of middleware functionalities and services, that interconnect the scientific community and all its computing, storage, visualization and instruments. I think we will see the Grid middleware, which originally was based on Globus, expand through open source to have many detailed capabilities tuned to the scientific needs of each community. So, CI is a superset of the Grid and one in which all the science-driving projects supported by the federal funding agencies will help define the capabilities needed in the middleware.

Gt: How does a North American CI differ from the e-Science movement in Europe?

SMARR: The e-Science movement in Europe seems to be more organized and developed than the CI effort in America. For instance, the EU has an Enabling Grids for E-Science in Europe project (EGEE, public.eu-egee.org) that all emerging e-Science activities can utilize. The EGEE already has 14,000 CPUs distributed over 130 sites. They also explicitly understand that e-Science prototyping is driving e-Commerce and e-Government functionalities. I am encouraged to see the director of the National Science Foundation, Arden Bement, personally driving the completion of the U.S. CI program. I expect the next 12-18 months will see a rapid rise in U.S. CI efforts.

Gt: How far along are we in establishing a national CI? Is there a timeline in place for when we might see all of the country’s labs, universities and other research centers linked together via one huge, high-speed network?

SMARR: The CI requires a dedicated optical backplane. The federal agencies are gradually constructing one, although in an uncoordinated fashion. NASA and DOE have both announced they will make use of the National LambdaRail to connect a number of their science centers. For several years, NSF has funded the four 10G “backbone” wavelengths of the TeraGrid, which connect Los Angeles and Chicago, and then go over CENIC and I-WIRE circuits in California and Illinois, respectively. With the Extended TeraGrid, the NLR is used to get to Austin (Texas), Pittsburgh and other sites. So, if you make a map of the U.S. with all these lambdas, I think you can see we have a good start to creating a persistent CI backplane. There are about two dozen state and regional dark fiber networks that are being connected to NLR and to the universities. So, within a year, there will be a very vibrant U.S. CI optical backplane, interconnected to the rapidly growing Global Lambda Integrated Facility, which connects many countries’ research institutions worldwide.

Gt: Another concept that has received a lot of press is enterprise Grid computing - most deployments of which are maintained within each company’s own walls. Given issues of security, trust and other sociological factors, do you think enterprises will ever be able to share resources like we see from the research community?

SMARR: Hard to say. As you know, I was very involved with Andrew Chien and the Entropia team to see if a startup could be successful in supplying cycles from a highly distributed PC Grid. In the end, the tech crash brought most of these companies down, so it still isn’t clear if this will take off commercially. However, I was always most excited about creating public networks, modeled on the Great Internet Mersenne Prime Search and SETI@Home. The May 6 issue of Science magazine also has a major piece on using PC Grids to solve large-scale science problems. I also see Condor continuing to be adopted. For instance, we are considering working on integrating Condor with the OptIPuter. My dream has been that PC Grids can give us orders of magnitude gains in computing. In fact, I coined the term “megacomputer” to mean a Grid of millions of computers - 1,000 times more than today’s high-end supercomputers. I believe we are getting much closer to that goal.
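
A rough sense of the scale behind the “megacomputer” term, spelled out in Python. The node counts below are illustrative, assumed 2005-era figures, not numbers quoted in the interview.

```python
# Illustrative scale comparison behind the "megacomputer" idea. The machine
# counts are rough, assumed 2005-era figures, not numbers from the interview.

PC_GRID_NODES = 2_000_000          # assume a volunteer PC Grid of ~2 million machines
SUPERCOMPUTER_PROCESSORS = 2_000   # assume a high-end machine with a few thousand processors

scale_factor = PC_GRID_NODES / SUPERCOMPUTER_PROCESSORS
print(f"A {PC_GRID_NODES:,}-node PC Grid has roughly {scale_factor:,.0f}x "
      f"as many processors as a {SUPERCOMPUTER_PROCESSORS:,}-processor supercomputer.")
# -> roughly 1,000x in raw processor count, which is the order-of-magnitude
#    gain the term "megacomputer" is meant to capture (delivered throughput
#    depends heavily on the application).
```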

Gt: Can the full Grid experience be achieved without a real community effort?

SMARR: The reason the Grid movement is radical is mainly sociological, in my view. It requires us to give up personal ownership of our hardware and live off the resources of others - a much more collectivistic mentality than most are willing to entertain. Yet, this is exactly what happened with the electrical power grid. It was not until after WWII that most U.S. factories gave up generating electricity on-site and learned to live off the electrical power grid…

Gt: On a personal note, it’s been five years since you left NCSA and came out to UCSD. Are you enjoying San Diego after spending such a long time in the Midwest? What about professionally - has your time at UCSD/Calit2 lived up to what you envisioned when you made the move?

SMARR: It has been one of the most productive five years of my life. The opportunity to work with hundreds of faculty and students, building new visions like Calit2, the OptIPuter, LOOKING and so on is very rewarding. However, the coming of the LambdaGrid and National LambdaRail has allowed me to stay connected to my colleagues in the Midwest. Hopefully, we will all begin to “live the dream” soon and use High Performance Collaboratories (HiPerCollabs), so I can stay in beautiful La Jolla and have HD and Super HD telepresence experiences with my worldwide set of colleagues. It sure would beat being on an airplane every week as I live now…

Gt: Finally, I’d like to ask about iGrid 2005, which is coming up in September. Are slots already filling in on the schedule? What topics can attendees expect to hear and learn about at the event?

SMARR: Maxine Brown and Tom DeFanti are the co-chairs of iGrid 2005 (www.igrid2005.org), which will be hosted in the newly completed Calit2@UCSD building Sept. 26-29. We have received proposals for dozens of experiments involving 21 countries. You will see the most extreme applications of LambdaGrids and ultra-speed networking - multiple HD video streams, digital cinema, linking supercomputers and remote data sets, interactive visualizations, etc. Since a meeting of the GLIF will be held right after iGrid on Sept. 30, the world’s leaders in taking the Grid to the next level will be here, exchanging their best ideas with each other. We are also working closely with Ron Johnson at the University of Washington, which hosts SC|05 two months after iGrid, to reuse the demos. Thus, we have the “networking double header of the century” occurring later this year!

Gt: Is there anything else you would like to add?

SMARR: Yes, the big paradigm break that dedicated optical circuits create is that LambdaGrids only work if there is tight end-to-end connectivity. This means our old model of Internet2 providing gateway-to-gateway connectivity and campuses just being responsible for on-campus networking will be supplemented by a new, more collaborative model in which the end user will partner with his or her campus infrastructure provider and with WAN providers like Internet2, NLR, state networks like CENIC, etc.

I think this is quickly driving campuses to what I call the Third Era of campus infrastructure. The First Era was that of the campus computer center in the 1960s and 70s. With the coming of the NSF Supercomputer Centers and the NSFnet in 1985, the campuses pretty much got out of the business of providing cycles and moved to providing campus networking - the Second Era. However, the coming of NLR and the OptIPuter is driving campuses to a Third Era, in which campuses provide co-location facilities for massive amounts of rotating storage, hosted cluster computer “Condos” and a campus optical overlay to the common shared Internet. This way, scientists only need to have the visualization and analysis displays in their labs, outsourcing the support and servicing of their clusters to the campus and using the central campus disk as a cache to the national and international federated repositories of science data.
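
A minimal sketch of the cache pattern described above, assuming a hypothetical campus storage mount and remote repository URL (neither is named in the interview): a lab workstation asks the campus store for a dataset and only pulls it across the wide-area network on a miss.

```python
# Minimal sketch of "campus disk as a cache" for federated science data.
# The mount point, repository URL and dataset name are hypothetical, chosen
# only to illustrate the data flow described above.
import pathlib
import urllib.request

CAMPUS_CACHE = pathlib.Path("/campus/colo/cache")          # assumed campus co-location store
REMOTE_REPOSITORY = "https://repository.example.org/data"  # assumed federated repository

def fetch_dataset(name: str) -> pathlib.Path:
    """Return a local path to the dataset, pulling it over the WAN only on a cache miss."""
    local_copy = CAMPUS_CACHE / name
    if local_copy.exists():
        return local_copy                                   # cache hit: serve from campus disk
    local_copy.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(f"{REMOTE_REPOSITORY}/{name}", local_copy)
    return local_copy                                       # cache miss: now staged on campus

if __name__ == "__main__":
    path = fetch_dataset("usarray/2005/waveforms.nc")
    print(f"Dataset staged at {path}; lab display nodes read it from campus storage.")
```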

I think NSF’s Director Bement said it very well, as quoted in the Chronicle of Higher Education recently: “… campus networks need to be improved. High-speed data lines crossing the nation are the equivalent of six-lane superhighways. But, those massive conduits are reduced to two-lane roads at most college and university campuses. Improving cyberinfrastructure will transform the capabilities of campus-based scientists.”

* see related article “Tony Hey Joins Microsoft” in the General section of this issue for more information.

By Derrick Harris, Editor GRIDtoday Special Feature

Copyright GRIDtoday.