SC2000 RESEARCH EXHIBITS

Resource-intensive computational science, scientific collaboration and leading-edge technologies characterize the SC2000 Research Exhibits, which feature work from the world's research universities, labs and centers. New and innovative research efforts and results best suited to live interchange and demonstration will be exchanged and exhibited, promoting interactions among the presenters and attendees. High performance network links will be used for dynamically presenting research that involves distributed collaborating sites or for linking to remote resources or special facilities. These exhibits are adjacent to the industry exhibits and are staffed during all announced exhibition hours, beginning with the Monday evening Gala Opening event.

Attention Exhibitors: It is now time to order furniture and electricity for your SC2000 Research booth. Please consult the Ordering Instructions for details.

Questions: research@sc2000.org


RESEARCH EXHIBITS CHAIR
VIRGINIA A. TO, HIGH PERFORMANCE TECHNOLOGIES, INC.


SC2000 RESEARCH EXHIBITORS
AIR FORCE RESEARCH LABORATORY
Booth: R1215
www.afrl.af.mil
hannaj@rl.af.mil

The CCCP comprises n Athlon-processor nodes connected via switched Fast Ethernet in a star topology. It uses readily available commercial components to assess the capabilities of Beowulf-class supercomputers that are not subject to export controls. Optimal performance will be attained through optimizing compilers and hand-tuned libraries. The CCCP aims to provide a parallel Java design environment and development support for the Common HPC Software Support Initiative, and it will support the Joint Battlespace Infosphere development project at the Air Force Research Laboratory. The cluster will also provide a test bed for evaluating and integrating new and emerging COTS (Commercial Off-The-Shelf) PCI-based multiprocessor add-on hardware, including multiple embedded processors and reconfigurable FPGA (Field Programmable Gate Array) architectures. The primary goal of the CCCP is the research, development, and application of advanced computing technologies relevant to the Air Force Information Technology mission. This mission includes high performance computing architectural design and simulation, advanced parallel hyperspectral imaging algorithm design and evaluation, parallel surveillance and tracking algorithm development and characterization, and advanced mixed-technology (MEMS, MEFS, electro-optical, etc.) system-level design and simulation.

AMES LABORATORY, SCALABLE COMPUTING LAB
Booth: R298
www.scl.ameslab.gov
halstead@ameslab.gov

The Scalable Computing Laboratory in the DOE Ames Laboratory will showcase work on assessing and improving the communication performance of real-world parallel applications on large cluster systems. Included in this work is the option of trading latency for bandwidth by compressing message data. This is particularly relevant to cluster computing, in which compute cycles are cheap but intersite communication is limited. In addition, research into improving real-world application performance with lightweight message passing and OS-bypass techniques will be presented, together with parallel resource management tools.
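
One way to picture the latency/bandwidth trade mentioned above is a minimal sketch, assuming MPI and zlib are available, in which the sender spends CPU cycles compressing a message so fewer bytes cross the wire. This is an illustration of the general idea, not the laboratory's actual code; whether it pays off depends on how compressible the data are and how slow the link is relative to the processors.

    /* Hedged sketch: compress-before-send with MPI + zlib (rank 0 -> rank 1). */
    #include <mpi.h>
    #include <zlib.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define N 1000000                       /* payload size in bytes */

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        unsigned char *payload = malloc(N);
        uLongf zlen = compressBound(N);     /* worst-case compressed size */
        unsigned char *zbuf = malloc(zlen);

        if (rank == 0) {
            memset(payload, 7, N);                  /* highly compressible test data */
            compress(zbuf, &zlen, payload, N);      /* spend cycles ...              */
            MPI_Send(zbuf, (int)zlen, MPI_BYTE, 1, 0, MPI_COMM_WORLD); /* ... save bytes */
            printf("sent %lu bytes instead of %d\n", (unsigned long)zlen, N);
        } else if (rank == 1) {
            MPI_Status st;
            int count;
            MPI_Recv(zbuf, (int)zlen, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Get_count(&st, MPI_BYTE, &count);
            uLongf plen = N;
            uncompress(payload, &plen, zbuf, (uLong)count);  /* original N bytes restored */
        }
        free(zbuf);
        free(payload);
        MPI_Finalize();
        return 0;
    }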

ARCTIC REGION SUPERCOMPUTING CENTER
Booth: R1086
www.arsc.edu
ljevans@arsc.edu

The Arctic Region Supercomputing Center supports the computational needs of researchers within the Department of Defense High Performance Computing Modernization Program, the University of Alaska Fairbanks, other academic institutions and government agencies by providing high performance computing, visualization and networking resources, programming and technical expertise, and training. Areas of specialty supported by ARSC include ocean modeling, atmospheric sciences, climate/global change, space physics, satellite remote sensing, and civil, environmental and petroleum engineering. ARSC collaborates in a number of partnerships, including a joint effort with the U.S. Army Engineer Research and Development Center and SGI to build and evaluate a 512-processor SGI Origin 3800 single-system image. The Arctic Region Supercomputing Center operates a Cray T3E and a Cray SV-1, with visualization resources including a Pyramid Systems ImmersaDesk and a network of SGI workstations located in a video production/training lab and three additional access labs on campus.

ARGONNE NATIONAL LABORATORY
Booth: R186
www.mcs.anl.gov
freitag@mcs.anl.gov

Researchers at Argonne National Laboratory are developing powerful collaborative tools and technologies that will advance the state of the art in large-scale computing and will make scientists more productive. The exhibit showcases work in the following areas: numerical libraries for large-scale computational applications, parallel programming tools, collaborative tools, scalable superclusters, advanced visualization environments, software infrastructure for the national computational grid, servers enabling problem solving over the Internet, and associated scientific computing applications in such areas as computational chemistry and climate modeling. Closely tied with these projects is an emphasis on collaborations, including the ASCI program and the NCSA PACI Alliance.

LAWRENCE BERKELEY NATIONAL LABORATORY
Booth: R302
www.lbl.gov
rlboucher@lbl.gov

Lawrence Berkeley National Laboratory (LBNL), home to the Department of Energy's National Energy Research Scientific Computing Center (NERSC) and Energy Sciences Network (ESnet), has long been a leader in computing and networking research. Today, the Lab's HPC and networking capabilities and facilities are helping transform DOE's research programs. Berkeley Lab will highlight its expertise and achievements in computational science: NERSC users will present results achieved using Cray T3E and IBM SP supercomputers.

  • Scientific applications: LBNL, a leader in Adaptive Mesh Refinement, will release the latest AMR software and visualization packages and will demonstrate AMR's capabilities. DeepView, a scalable system for distributed microscopy and informatics, will be shown.
  • Networking: In addition to spotlighting ESnet's capabilities, LBNL will also demonstrate its BRO reactive firewall application, Quality of Service efforts, and the VisaPult visualization framework for distributed volume rendering using a workstation with graphics hardware.
  • Grid technologies: Demonstrations will show the availability of computing resources and software for accessing remote datasets.

BOSTON UNIVERSITY
Booth: R892
scv.bu.edu
glenn@bu.edu

Boston University's research exhibit features its NSF-funded project, MARINER (Mid-level Alliance Resource In the NorthEast Region). MARINER is a partner in the National Computational Science Alliance and extends the university's efforts in advanced scientific computing and networking to organizations throughout the region. Demonstrations of current research and educational projects developed through the Center for Computational Science and the Scientific Computing and Visualization Group will be shown using graphics workstations, posters, and videos in the exhibit booth. We will also be demonstrating distributed computing, collaboration, and visualization software with our Alliance and other partners.

BROOKHAVEN NATIONAL LABORATORY
Booth: R388
www.bnl.gov
peskin@bnl.gov

Brookhaven National Laboratory will exhibit new computational science developments in four research areas: 1) The Center for Data Intensive Computing (CDIC), initiated in 1998, is now fully established and pursuing research in high energy and nuclear physics, biological and environmental studies, hydrodynamics, materials science and chemistry, and advanced scientific computing environments; 2) The Relativistic Heavy Ion Collider (RHIC), whose experiments require massive computational facilities for data collection and analysis, has been commissioned and has produced its first collisions; 3) The RIKEN QCD-SP machine, winner of a Gordon Bell Prize in 1998, will be succeeded by a follow-on 10-Teraflop supercomputer, whose architecture will be described; and 4) The Brookhaven Data Visualization group will demonstrate and highlight advances in the three areas above, along with other new applications in the physical and life sciences and in distance learning. A portable version of Brookhaven's unique stereographic visualization facility will be in place for demonstrations.

CALIFORNIA INSTITUTE OF TECHNOLOGY, CENTER FOR ADVANCED COMPUTING RESEARCH (CACR)
Booth: R686
www.cacr.caltech.edu
chip@cacr.caltech.edu

The Center for Advanced Computing Research (CACR) at the California Institute of Technology was established to foster advances in computational science and engineering. CACR focuses on enabling such breakthroughs by: a) following an applications-driven approach to computational science and engineering research, b) conducting multidisciplinary research on leading-edge computing facilities, c) providing a rich, creative intellectual environment that cultivates multidisciplinary collaborations, and d) harnessing new technologies to create innovative large-scale computing environments. To achieve these goals, CACR conducts multidisciplinary, application-driven research in computational science and engineering through collaborations with Caltech and JPL faculty and staff and with R&D establishments throughout the world. Results from these collaborations will be featured in the research exhibit.

CORNELL THEORY CENTER
Booth: R1014
www.tc.cornell.edu
cal@tc.cornell.edu

The Cornell Theory Center (CTC) will showcase a number of computational science and engineering applications that run on our Velocity and Velocity+ high-performance clusters. Among the applications featured will be multi-scale materials modeling and LOOPP (Learning, Observing, and Outputting Protein Patterns), software developed and released through CTC's NCRR-funded parallel resource for biomedical scientists. CTC will demonstrate a Domain Monitoring tool developed at the center and the Cornell Multitask Toolbox for MATLAB. SciCentr, a unique informal education site in virtual world space, will also be shown. Acquired with the support of members of CTC's Advanced Cluster Computing Consortium (AC3), the Velocity clusters are built from industry-standard components and run the Windows 2000 operating system.

DEPARTMENT OF DEFENSE, HIGH PERFORMANCE COMPUTING MODERNIZATION PROGRAM
Booth: R397
www.wes.hpc.mil/index.htm
stinsod@wes.army.mil

The High Performance Computing Modernization Office (HPCMO) will demonstrate its support of the DoD and the warfighter via an interactive poster slide show, hard-copy posters, and some limited demonstrations. We will emphasize how the technology employed by the High Performance Computing Modernization Program "trickles down" to help the warfighter. We will show the Computational Technology Areas supported by the program and where these areas are supported, and we will highlight some of the past year's "Challenge Projects." We will also describe the Defense Research and Engineering Network, as well as our software initiative known as CHSSI. Information about the Programming Environment and Training (PET) program will also be provided.

DEPARTMENT OF ENERGY, ACCELERATED STRATEGIC COMPUTING INITIATIVE (ASCI)
Booth: R1124
www.llnl.gov/asci
chapman@lanl.gov

The U.S. Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) was created to develop and deploy the supercomputing environments (hardware and software) required to support three-dimensional, full-system simulations of U.S. nuclear weapon systems. In the absence of underground nuclear tests of those systems, numerical simulation will be the key to ensuring the safety and reliability of the U.S. nuclear deterrent. To meet this challenge, DOE scientists will need computers that achieve a speed of at least 100 Teraflops by 2004. The ASCI program has assembled powerful partnerships among DOE's three national security laboratories (Sandia, Los Alamos, and Lawrence Livermore National Laboratories), some of the world's leading computer manufacturers, and several of America's great universities. In this booth, the University of Minnesota's InTENsity PowerWall will be used to demonstrate the program's accomplishments by focusing on the overarching theme of the ASCI roadmap, including application accomplishments and the ASCI technology development timelines and challenges.

EAST CAROLINA UNIVERSITY
Booth: R914
www.ecu.edu/si
creaseyw@mail.ecu.edu

East Carolina University's (ECU) exhibit showcases Embracing Visualization Methods, Progressive HPC Applications, and Innovative NGI Initiatives. In collaboration and partnership with the Ohio Supercomputer Center, state-of-the-art visualization applications developed for the leading-edge Reconfigurable Advanced Visualization Environment (RAVE) will be demonstrated on an ImmersaDesk. Presentations include a walk-through of a Space-based Low Earth Orbit Hospitality Facility, a Bioactive Scorpion Venom Protein Simulation, a 3D simulation prototype of Blackbeard the Pirate's flagship, research results in quantum chemistry, and Monte Carlo simulations in molecular biology utilizing massively parallel processing techniques. Other demonstrations include quality-of-service techniques for assessing high-bandwidth telemedicine applications that utilize real-time, high-fidelity pediatric heart sound transmission. As an NCSA affiliate, ECU will host the Education, Outreach and Training Partnership for Advanced Computational Infrastructure (EOT-PACI) at our booth to provide information about national education projects and outreach initiatives.

EDINBURGH PARALLEL COMPUTING CENTRE
Booth: R597
www.epcc.ed.ac.uk
m.sawyer@epcc.ed.ac.uk

EPCC is one of the leading HPC centers in Europe. It was established ten years ago as a focus for the University of Edinburgh's HPC work and today has 45 full-time staff members. The center's mission is to promote the uptake of HPC in academia, industry, and commerce. EPCC provides service, in-depth support, and training for UK scientific researchers, and works on technology transfer projects with industry. The center runs a successful visitor program (TRACS) for European researchers and is involved in international HPC collaborations, including the Java Grande Forum and the European Grid Forum. EPCC hosts several HPC platforms, ranging from a 344-processor Cray T3E to an 18-processor Sun Microsystems E6500 and a 16-processor Beowulf-class cluster built in-house.

ELECTROTECHNICAL LABORATORY
Booth: R792
www.etl.go.jp
yoshio@etl.go.jp

Electrotechnical Laboratory (ETL), a Japanese research institute, performs research to produce creative science and technology. Tsukuba Advanced Computing Center (TACC), operated by the Agency of Industrial Science and Technology, supports the computational needs of HPC researchers. This exhibit demonstrates two recent research topics in HPC: 1) global computing using Ninf, a network-enabled server system for global computing, and 2) high performance computing with TACC supercomputers, including a Hitachi SR8000, an IBM RS6000/SP, and a 256-node Alpha cluster. TACC plays an important role as one of the world's leading computing centers for computational grids. The basic concept and status of a global computing infrastructure around the Asia-Pacific area, called the Asia-Pacific Grid (ApGrid), will be introduced in this exhibit.

EUROPEAN CENTER FOR PARALLELISM OF BARCELONA (CEPBA-UPC)
Booth: R1214
www.cepba.upc.es
torres@cepba.upc.es

The booth will present the developments and results CEPBA has achieved in research and development projects in recent years. The main project presented will be Paraver, a visualization and analysis tool for MPI, OpenMP, and Java programs. Other projects in the booth will be Nanos (cooperation between the OpenMP compiler and OS scheduling on multiprogrammed multiprocessors) and Dimemas (a simulator of DIstributed MEmory MAchines that is being used successfully to tune MPI applications). We intend to show that careful design of the individual tools enables their integrated use, supporting methodologies and practices that make parallelization highly productive. Visitors interested in these topics will be able to see demonstrations of the different projects and obtain explanations of CEPBA's developments and activities.

FERMI NATIONAL ACCELERATOR LABORATORY AND STANFORD LINEAR ACCELERATOR CENTER
Booth: R500
www.fnal.gov
www.slac.stanford.edu
ecs@lightlink.com

The world's physicists probe the laws of nature at Fermilab and SLAC, the nation's principal experimental high-energy physics facilities. Graphical displays representative of the billions of collisions measured by current experiments will introduce strategies for petabyte-scale data storage and access in this joint exhibit. SLAC will demonstrate its 150-terabyte-and-growing Objectivity/DB database, used by hundreds of physicists. Fermilab will demonstrate systems that will manage and analyze even larger volumes. Progress on the Particle Physics Data Grid's High-Speed File Replication Service will be shown. A demonstration of PingER will show worldwide network monitoring and six years of data. Physics analysis techniques and results will be described using the BaBar and CMS experiments as examples. The Sloan Digital Sky Survey will illustrate growing collaboration with other practitioners of data-dominated science. Fermilab (operated by URA Inc.) and SLAC (operated by Stanford University) are funded by the US Department of Energy.

HIGH PERFORMANCE COMPUTING CENTER STUTTGART (HLRS)
Booth: R1142
www.hlrs.de
mueller@hlrs.de

The High Performance Computing Center Stuttgart (HLRS) is a German national HPC center for research. In addition, together with debis Systemhaus GmbH and Porsche, it has formed a joint company to provide access to supercomputers for research and industry. At SC2000, HLRS will demonstrate its activities in the field of grid computing, or metacomputing. Our presentation will show the main building blocks HLRS is working with: computers, networks, middleware, and applications. Several projects highlight how these blocks are put together. To show the potential of the resulting framework, HLRS will integrate computing resources at the Tsukuba Advanced Computing Center (TACC)/Japan, the Japan Atomic Energy Research Institute (JAERI)/Japan, Sandia National Labs (SNL)/USA, the Pittsburgh Supercomputing Center (PSC)/USA, the Manchester Computing Center (MCC)/UK, the National Center for High Performance Computing (NCHC)/Taiwan, and HLRS/Germany. Several applications in different fields will use this framework, with results visualized by our own collaborative visualization tool, COVISE.

INDIANA UNIVERSITY, PURDUE UNIVERSITY, AND THE UNIVERSITY OF NOTRE DAME
Booth: R701
www.indiana.edu/~rindiana
dhart@indiana.edu

Research@Indiana is a collaboration among Indiana's three largest research universities (Indiana University, Purdue University, and the University of Notre Dame) to showcase research and development accomplishments in high performance computing, high performance networking, and visualization. The display will showcase computer science developments, including cluster technology, collaboration, grid technology, and massive data storage systems, as well as applications in areas such as astronomy, bioinformatics, chemistry, engineering, medicine, and physics. Indiana has become increasingly important as a center of information technology research, development, and commerce. Home to the Abilene and TransPAC NOCs, Indiana's universities are consistently represented in the Top500 list, and Indiana's computer scientists are developing important new software technology. Just as the research activities of Indiana's three R1 universities cover a great diversity of disciplines, so do the accomplishments of Indiana-based researchers using HPCC applications.

INTERNET2
Booth: R800
www.internet2.edu
ekl@internet2.edu

Internet2 is a consortium of over 170 universities working in partnership with industry and government to develop and deploy advanced network applications and technologies, accelerating the creation of tomorrow's Internet. Internet2 is recreating the partnership among academia, industry, and government that fostered today's Internet in its infancy. The primary goals of Internet2 are to: • Create a leading-edge network capability for the national research community • Enable revolutionary Internet applications • Ensure rapid transfer of new network services and applications to the broader Internet community.

JAPAN ATOMIC ENERGY RESEARCH INSTITUTE, CCSE
Booth: R1248
guide.tokai.jaeri.go.jp/ccse
hirayamt@koma.jaeri.go.jp

CCSE of the Japan Atomic Energy Research Institute was established in April 1995 under governmental guidance to promote computational science and engineering among the national and other semi-governmental research organizations. CCSE operates the largest and best-equipped complex parallel computer system, consisting of nine parallel computers of different architectures, five vector and four scalar, located at four sites in Japan. Building on this distinctive complex of computer systems, CCSE has put special emphasis on R&D in parallel computing technologies and has developed a common technological basis for parallel processing, including: a) a distributed parallel computing environment, b) parallel computation algorithms, c) parallel processing tools, and, as a specific application of these technologies, d) techniques for numerical experiments. We report on our activities regarding these issues.

JAPAN MARINE SCIENCE AND TECHNOLOGY CENTER (JAMSTEC)
Booth: R1108
www.jamstec.go.jp
otsukak@jamstec.go.jp

Japan Marine Science and Technology Center (JAMSTEC), a general oceanographic research institution, was established in October 1971. JAMSTEC installed a NEC SX-4 in spring 1996 and an SX-5 in November 1999 for studying global change. These supercomputer systems are indispensable for understanding and predicting phenomena such as El Niño events, global warming, weather disasters, and tectonic structure around plate boundaries. In 1997, the Earth Simulator project was started as a cooperative project among JAMSTEC, JAERI, and NASDA under the direction of the Science and Technology Agency (STA) of Japan. The Earth Simulator is a distributed-memory parallel supercomputer composed of 640 processor nodes, each consisting of eight vector processors. The total peak performance and main memory capacity are 40 teraflops and 10 terabytes, respectively. The Earth Simulator will be in operation in the first quarter of 2002 in Yokohama. It is expected to run a high-resolution coupled atmosphere-ocean general circulation model, which is being developed by the Frontier Research System for Global Change (FRSGC).

JAPAN SCIENCE AND TECHNOLOGY CORPORATION (JST)
Booth: R1109
www.jst.go.jp
minmin@tokyo.jst.go.jp

Japan Science and Technology Corporation (JST), a semi-governmental organization, reports to the Science and Technology Agency. Since 1996, it has operated the JST Super Computer Complex (SCC). The system is being used for two projects: "Electronic Structures Data System" in materials science and "Gene Finding Tools" in life science. In 1998, JST started "Research and Development for Applying Advanced Computational Science and Technology" (ACT-JST), a grant program that promotes research and development using computational science, technology, and networks. Twenty-five projects are ongoing under the program, and some of them run tools on the SCC. The exhibit shows some of the systems being developed under the ACT-JST program and on the SCC.

JOHN VON NEUMANN INSTITUTE FOR COMPUTING
Booth: R1142
www.fz-juelich.de/nic
n.attig@fz-juelich.de

The John von Neumann Institute for Computing (NIC), whose work is mainly carried out by the Central Institute for Applied Mathematics (ZAM), is one of three national HPC centers in Germany. Its task is to support and further develop scientific computing in Germany, in cooperation with other centers, universities, and research institutes, by providing supercomputer resources nationwide, developing computational methods, and conducting interdisciplinary research. Using a software system developed within the government-funded UNICORE Plus project, we will showcase the capabilities of uniform access to different supercomputers in Germany. R&D work on SMP-cluster systems, such as porting applications, performance analysis, programming models and tools for hierarchical systems, and especially the interaction of shared-memory and distributed-memory parallelism, will also be presented. We will also demonstrate recent activities in the performance analysis of parallel programs, the design of parallel algorithms, and the simulation and visualization of complex applications.

KRELL INSTITUTE
Booth: R1136
www.krellinst.org
kilmer@krellinst.org

The Krell Institute booth will showcase research conducted under two fellowship programs administered by Krell and will demonstrate computer-based computational science training for K-12 teachers using materials from the Adventures in Supercomputing pilot program. The two fellowships are the Department of Energy's Computational Science Graduate Fellowship (CSGF) and the computer science fellowship sponsored by Los Alamos National Laboratory and Lawrence Livermore National Laboratory. The CSGF fellows carry out research in a wide variety of resource-intensive computational science areas, including turbulent combustion, aeroelastic system simulation, and adaptive algorithms used for sensory perception, planning, and control. Computer science fellows concentrate their research in the high performance computing areas of scalable operating/run-time systems, hierarchical program systems, compiler design, networking research, performance modeling, and component architectures.

LOS ALAMOS NATIONAL LABORATORY
Booth: R298
www.lanl.gov
chapman@acl.lanl.gov

Scientific and technological research and development remain at the core of Los Alamos. Ongoing research ranges from innovative biological research to predictive modeling of the global climate, wildfire, and transportation systems, and from developing the scientific visualization tools of tomorrow to operating one of the most powerful computers installed anywhere in the world. Los Alamos software teams are responding to the challenge of taming software complexity by building an integrated software infrastructure for scientific simulation development. Demos, posters, and an interactive video presentation will demonstrate these capabilities.

MAUI HIGH PERFORMANCE COMPUTING CENTER AND THE ALBUQUERQUE HIGH PERFORMANCE COMPUTING CENTER
Booth: R196
www.mhpcc.edu
www.arc.unm.edu
cshirley@mhpcc.edu

The Maui High Performance Computing Center (MHPCC) and the Albuquerque High Performance Computing Center (AHPCC) are national supercomputing centers managed by the University of New Mexico (UNM). Established under a Cooperative Agreement with the Air Force Research Laboratory (AFRL), MHPCC is a leader in scalable computing technologies and uniquely chartered to support the Department of Defense (DoD), government, commercial, and academic communities. AHPCC provides an environment for research and education in advanced high performance computing, interdisciplinary applications, and state-of-the-art communications. MHPCC is a Distributed Center of the DoD High Performance Computing Modernization Program (HPCMP), and both MHPCC and AHPCC are SuperNodes of the National Computational Science Alliance (Alliance). Projects featured at SC2000 include: advanced image enhancement research, new material design, development of Linux clusters and collaborative environments for performance analysis of scientific and engineering applications, and industrial design projects with private industry.

NATIONAL AERONAUTICS AND SPACE ADMINISTRATION
Booth: R695
www.nasa.gov
pelson@mail.arc.nasa.gov

NASA's research exhibit demonstrates how NASA meets its goals using high-performance computing and networking, with projects from five field installations. A variety of real-time and interactive demonstrations feature the latest research in: • Computational applications serving NASA's aerospace, earth science, and space science missions • Remote collaboration and use of virtual reality • Software tools for developing, debugging, converting, monitoring, and optimizing code in grid environments • Learning technologies, and • High-end networking. A large collection of workstations, interactive theaters, and virtual reality devices is used to display the research and encourage visitor interaction.

NATIONAL AEROSPACE LABORATORY OF JAPAN AND STA HPC GROUP
Booth: R1208
www.nal.go.jp
nahirose@nal.go.jp

National Aerospace Laboratory (NAL) is a government research institute under the Science and Technology Agency (STA) of Japan. NAL has been a leader in HPC among the STA organizations that make up the STA-Japan HPC Group, and at SC2000 it joins its sister organizations to present the group's activities. NAL developed the Numerical Wind Tunnel (NWT) and has operated it since 1993. NWT is the main computing engine of the NAL Numerical Simulator (NS) system, and its contribution to Japanese aerospace research and development projects, such as NEXST (a supersonic civil transport project) and HOPE (an unmanned space shuttle between the Space Station and Japan), is significant. NAL also promotes fundamental research in fluid dynamics and computational sciences. The exhibit will present the CFD Research Center and the Multidisciplinary Simulation Concept for the next-generation NWT. The present status of WANS (web access to the NS system) and UPACS (a unified CFD software package) will also be demonstrated over a connection between NWT and the booth.

NATIONAL CENTER FOR ATMOSPHERIC RESEARCH (NCAR), SCIENTIFIC COMPUTING DIVISION
Booth: R812
www.scd.ucar.edu
don@ucar.edu

NCAR's Scientific Computing Division presents The Virtual Earth System, a science and technology exhibit focused upon our large-scale simulation efforts in a variety of societally important research domains. This presentation uses a combination of virtual reality, digital media, and high-bandwidth networking to explore and display results from a broad spectrum of research efforts including: climate and global change, high-resolution severe weather, wildfires, atmospheric chemistry, terrestrial and solar turbulence, and more. The challenges and demands of these simulation efforts and the subsequent visualization of results provide insight into the importance and difficulty of developing a better understanding of our planet's processes. Progress in the Earth System Grid project, a joint effort with DOE in the area of wide-area large-scale data analysis, will also be demonstrated.

NATIONAL CENTER FOR HIGH-PERFORMANCE COMPUTING (NCHC), TAIWAN
Booth: R806
www.nchc.gov.tw
fplin@nchc.gov.tw

The National Center for High-performance Computing (NCHC) is one of the national laboratories under the National Science Council (NSC) in Taiwan and is the only research center for high-performance computing applications in Taiwan. Recently, the center was also designated the center for Taiwan's next-generation research network. NCHC has conducted various research applications in high-performance computing and networking. In the research exhibition, we will use immersive and collaborative virtual reality to showcase the following programs: 1) plastic extrusion modeling and mold design using a numerical wind tunnel, 2) a virtual GIS-based 3D hydro-dynamic model of the Tamshui river, 3) the crashworthiness of the newly developed Yulong vehicle during a frontal impact, and 4) the structure-based drug design model of a transmembrane endothelin receptor and its antagonist. Moreover, we will participate in "Bandwidth-Intensive Network Applications at SC2000" collaboratively with five supercomputing centers from Germany, the US, Japan, and the UK to showcase global meta-computing applications.

NATIONAL COMPUTATIONAL SCIENCE ALLIANCE
Booth: R804
www.ncsa.uiuc.edu
kareng@ncsa.uiuc.edu

The National Computational Science Alliance (Alliance) is a partnership of more than 50 institutions working to prototype the next century's advanced computational and information infrastructure. The National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign anchors the Alliance, which is funded by the NSF's Partnerships for Advanced Computational Infrastructure (PACI) program. At SC2000, Alliance teams will demonstrate how the Alliance attacks large-scale problems of science and engineering and drives technology development. They will also show how scientists are using the emerging PACI Grid, a prototype national technology grid, to conduct long-distance collaborative research. The use of cluster technologies in scientific research, specifically clusters built on Intel's new 64-bit Itanium processor, will also be highlighted. Many applications have been ported to the cluster platform, and their performance often matches or exceeds that on more traditional supercomputing platforms.

NATIONAL COORDINATION OFFICE FOR INFORMATION TECHNOLOGY RESEARCH AND DEVELOPMENT
Booth: R906
www.ccic.gov
vandamme@ccic.gov

The National Coordination Office (NCO) for Information Technology Research and Development (ITRD) exhibit will feature demonstrations and displays about Federal information technology R&D. This year's exhibit will highlight applications of SuperNet, DARPA's gigabit-speed Next Generation Internet (NGI) testbed. These multi-agency applications will include:

  • Digital Amphitheater: A demonstration of the protocols and toolsets for multi-site collaborations (up to 8 sites).
  • Digital Earth: A demonstration of open, distributed, scalable, multi-resolution 3D representations of the Earth into which massive quantities of geo-referenced information can be embedded.
  • Land-Speed Record: A demonstration of multiple HDTV streams designed to break the "land speed record" for maximum data flow rates, in the range of 1 GB/second.
  • Distributed Image SpreadSheet (DISS): A demonstration of an interactive visualization and analysis tool for collaborative environments, data mining, and the study and comparison of large volumes of data often in a real-time mode.
  • Uncompressed 1.5 Gbps HDTV: A demonstration of uncompressed two-way HDTV transmission and extension of a Gigabyte System Network (GSN) between Washington, DC and SC2000.
  • Uncompressed HDTV transmission over IP: A demonstration of studio-quality uncompressed HDTV signals transmitted over IP.
  • Web100: A demonstration of a suite of network performance diagnostic tools and a network auto-tuning tool that automatically adjusts the network to achieve improved throughput.

Additional information will be available about multi-agency Federal IT R&D efforts, the President's Information Technology Advisory Committee, and the role of the NCO.

NATIONAL PARTNERSHIP FOR ADVANCED COMPUTATIONAL INFRASTRUCTURE (NPACI)
Booth: R904
www.npaci.edu
mgannis@sdsc.edu

NPACI is an NSF-supported consortium of four dozen premier academic, industrial, and research institutions, led by SDSC at UC San Diego. Its mission is to advance science by creating a national computational infrastructure through • Capability computing: Providing compute and information resources of exceptional capability to enable scientific discovery at scales not previously achievable • Discovery environments: Developing and deploying integrated, easy-to-use computational environments to foster scientific discovery in traditional and emerging disciplines • Computational literacy: Extending the excitement, benefits, and opportunities of science to a diverse population. NPACI's exhibit will showcase "Alpha projects" in bioinformatics infrastructure, protein folding, telescience, multicomponent environmental modeling, scalable visualization, biological fluid dynamics, and cellular microphysiology. We will demonstrate new tools and applications being developed by the cooperating partners, present NPACI resources in action through "transparent supercomputing," and show how the partnership's activities, products, and services are meeting real needs of the computational science community.

PROPHESY PROJECT
Booth: R598
www.ece.nwu.edu/EXTERNAL/vtaylorlab
zlan@ece.nwu.edu

Efficient execution of applications requires insight into how system features impact application performance. For distributed systems, gaining this insight is complicated by the complexity of the system features, and it generally comes from significant experimental analysis and, possibly, the development of performance models. This research exhibit will present the Prophesy Project, an infrastructure that aids in gaining this needed insight based upon one's own experience and that of others. Prophesy consists of three major components: profiling and instrumentation, relational databases, and data analysis.
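
As a deliberately simplified illustration of the profiling-and-instrumentation component, the sketch below wraps a code region with a timer and emits one record per execution; records like these are the kind of raw material a relational performance database can accumulate across runs. The macro name and output format are invented for this example and are not Prophesy's actual interface.

    /* Hedged sketch of region-level timing instrumentation (names invented). */
    #include <stdio.h>
    #include <time.h>

    /* Wrap a code region; emit one record per execution that could later be
     * loaded into a relational database for cross-run analysis. */
    #define PROF_REGION(name, stmt)                                        \
        do {                                                               \
            clock_t t0 = clock();                                          \
            stmt;                                                          \
            double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;         \
            printf("region=%s seconds=%.6f\n", (name), secs);              \
        } while (0)

    int main(void)
    {
        double sum = 0.0;
        PROF_REGION("accumulate", {
            for (long i = 1; i <= 10000000L; i++)
                sum += 1.0 / (double)i;
        });
        printf("sum=%f\n", sum);
        return 0;
    }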

OAK RIDGE NATIONAL LABORATORY
Booth: R587
www.csm.ornl.gov
rileyba@ornl.gov

ORNL highlights scientific discoveries in climate, genomics, materials, molecular biology, and nuclear medicine made possible by advances in mathematical methods and high performance computing. Learn how performance evaluations of early systems are used to develop specialized techniques to optimize applications for terascale systems. See how M3C and C3 tools and Harness/PVM help build fault-tolerant clusters (NT, Linux, or NOW) that can be administered via a web browser. Check out plug-ins for going unplugged and the CCA (Common Component Architecture), the next best thing to cut and paste for developing large-scale multi-disciplinary simulations. Visit the ORNL booth and explore:

  • Will tomatoes grow in Tennessee in 2099?
  • Can I have a cool car in California?
  • Can I take that shipment through Nevada?
  • How many ways can a virus vibrate?
  • Why do I have this craving for cheese?
  • How can I scale my cluster administrator (without leaving scars)?
  • How fast is fast enough?

OHIO SUPERCOMPUTER CENTER (OSC)
Booth: R1006
www.osc.edu
kkelley@osc.edu

Over the years, OSC has made a place for itself in the world of high performance networking and computing. This year, we plan to showcase how the work we have done has made a difference; our theme is "Expanding the HPC Frontier." One important focus of OSC is our development of a working prototype system for the virtual simulation of temporal bone dissection, a scientific visualization with the potential to strongly impact medical procedures and instruction. Another focus is our work with Internet2 technology: ITEC-Ohio, led by OARnet (a division of OSC) and a consortium of Ohio universities and corporate partners, is one of two national testbeds for Internet2 testing and evaluation of emerging Internet technologies. Finally, OSC's collaborations with research institutions such as East Carolina University have opened doors to expanded technological applications. For example, OSC participated in an Internet-supported military exercise in which medical doctors from Columbus provided diagnostic and treatment advice for U.S. military personnel injured on battlefields halfway around the globe.

OSAKA UNIVERSITY, CYBERMEDIA CENTER
Booth: R900
www.rd.center.osaka-u.ac.jp
date@rd.center.osaka-u.ac.jp

Analysis of brain data remains time-consuming, in spite of recent processor advances, because the analysis is often performed on a single-processor workstation or personal computer. Our goal is to seamlessly acquire, analyze, and visualize brain data, which are essential to understanding brain function, over the Internet. For this purpose, we have been developing a MEG (magnetoencephalography) data analysis system using emerging grid technologies; MEG is a promising brain functional imaging modality. We will show that our system can dramatically reduce analysis time and has the potential to improve the current situation in brain science. In our system, MEG data analysis is performed on remote high-performance computers, and the results are then immediately visualized.

PACIFIC NORTHWEST NATIONAL LABORATORY
Booth: R998
www.pnl.gov/cse
nlee.prince@pnl.gov

Pacific Northwest combines computational, mathematical, and engineering sciences and high performance computing technology to solve key scientific problems in: • Applied mathematics - complex, multi-spatial-scale problems • Atmospheric science - local, regional, and global models • Computational biology - modeling micro- and macro-scale systems with large data sets • Chemistry - management and remediation of legacy nuclear wastes • Computer science - visualization, problem-solving environments, and artificial intelligence products • Engineering - modeling effects of stress, heat, and impact on materials • Reactive chemical transport - coupling atomic-scale with continuum models. Pacific Northwest scientists are developing modeling and simulation applications in these domains. Together with high-performance tools and libraries, these applications run on massively parallel computer systems to efficiently manage and analyze very large data sets. Pacific Northwest will exhibit ongoing research in these domains.

PITTSBURGH SUPERCOMPUTING CENTER (PSC)
Booth: R508
www.psc.edu
hackworth@psc.edu

The Pittsburgh Supercomputing Center (PSC) is an NSF national terascale supercomputing center that also receives funding from the Department of Energy, the National Institutes of Health, and the State of Pennsylvania. PSC is dedicated to providing government, academic, and industrial users with access to state-of-the-art high performance computing and communication resources. Our educational mission features an internship program that provides participants with real experience in a high-technology environment. Above all, we strive to provide a flexible environment conducive to solving today's largest and most challenging computational science problems. Our research exhibit will demonstrate the capabilities of our resources, which include a Cray T3E/LC512 and other HPC platforms. We will feature a variety of demonstrations designed to showcase PSC research. Particular areas of focus include materials science, high energy physics, weather modeling, and computational biomedical research such as bioinformatics, structural biology, computational pathology, and functional MRI.

THE PURDUE UNIVERSITY NETWORK COMPUTING HUBS
Booth: R198
www.ece.purdue.edu/punch
kapadia@purdue.edu

The booth will present and demonstrate computing portal and wide-area network-computing technologies that allow seamless management of high-performance applications, data, and machines distributed across wide-area networks. These technologies allow users to access and run applications (even unmodified commercial applications) from anywhere via standard Web browsers. The exhibit will highlight the manner in which state-of-the-art technologies can be used to build end-to-end solutions for high-performance, ubiquitous supercomputing on Intranets and the Internet. The demonstration will be based on PUNCH, a network computing system that has been operational for five years. PUNCH currently provides computing services to about 800 users across ten countries; 50 engineering software packages developed by 13 universities and six vendors are available. The exhibit will include presentations on several multi-university projects for collaborative engineering and distance education that utilize PUNCH as an enabling technology.

REAL WORLD COMPUTING PARTNERSHIP (RWCP)
Booth: R403
www.rwcp.or.jp
ishikawa@rwcp.or.jp

Real World Computing Partnership (RWCP), funded by the Japanese government, will show: 1) network architectures and 2) system software and applications for seamless parallel and distributed computing environments. Two network architectures will be presented: i) Comet, a clustering technology using IP over OC48c POS (2.5 Gbps) for information grids, and ii) RHiNET, a local-area system network for high performance parallel computing. The following system software and applications will be shown: i) the SCore cluster system software, ii) the cluster-enabled Omni OpenMP compiler for SMPs and PC clusters, iii) the PROMISE programming environment for regular and irregular scientific applications, iv) the SPST programming tool for heterogeneous parallel and distributed systems, and v) a parallel data mining system. To demonstrate the system software and applications developed at RWCP, RWC PC Cluster III, consisting of 128 PCs connected by Myrinet, three Fast Ethernets, and Gigabit Ethernet, will be brought to SC2000. Visitors will see the impact of network bandwidth on applications.

RESEARCH ORGANIZATION FOR INFORMATION SCIENCE & TECHNOLOGY (RIST)
Booth: R602
www.tokyo.rist.or.jp
wada@tokyo.rist.or.jp

Research Organization for Information Science & Technology (RIST) was established in 1995 under the umbrella of the STA (Science and Technology Agency). Since then, RIST, located in central Tokyo, has been working in accordance with STA's guidance to advance the frontier of computational science and technology. One of its major missions is to support integrated computational environments focusing on earth science and related areas. The exhibit mainly features GeoFEM (a parallel finite-element solid earth simulation code) and Foo-Jing (a framework for the next-generation atmospheric model).

SAITAMA UNIVERSITY
Booth: R992
www.cent.saitama-u.ac.jp
ido@poti.fms.saitama-u.ac.jp

Computational studies using high performance computers and simulation codes are applied in the fields of engineering and materials science. Three-dimensional visualization environments have been developed for research and education. Educational systems built in these visualization environments serve undergraduate students and even younger students in Saitama's middle schools.

STANDARD PERFORMANCE EVALUATION CORPORATION (SPEC)
Booth: R1000
www.spec.org/hpg
eigenman@ecn.purdue.edu

The booth will present the benchmarking activities of the SPEC High-Performance Group. The exhibit pursues two goals. First, it will present SPEC's high-performance computing benchmarks, SPEChpc and the new SPEComp suite. These benchmarks are a service to the HPC community, where they can be used for machine procurement, for improving existing computer systems, and for research on the software and hardware components of HPC systems. Second, the booth will present several research projects closely related to the SPEC effort. These projects define performance evaluation methodologies, characterize computational applications, and evaluate candidate benchmarks. We will present a number of such efforts from several participating organizations. One particular highlight of this year's exhibit will be the new SPEComp benchmark suite, which provides benchmarks written in the emerging parallel programming standard OpenMP, now supported on all major HPC platforms.

STANFORD UNIVERSITY, CENTER FOR COMPUTER RESEARCH IN MUSIC AND ACOUSTICS
Booth: R400
www-ccrma.stanford.edu
cc@ccrma.stanford.edu

This research project evaluates quality of service (QoS) using "Sound Waves on the Internet from Real-time Echoes." New, no-compromise, TCP-based audio applications will be demonstrated using a simplified approach to high quality music and sound streaming over IP networks. Previously existing systems for streaming digital audio involve a number of trade-offs: because of transmission bandwidth limitations and best-effort delivery, audio signal compression of one form or another is typical, and buffering of data, which often delays a signal by seconds, safeguards against delivery uncertainties. Audio is an unforgiving test of networking: if one data packet arrives too late, we hear it. Compromises of signal quality and interactivity have been necessary to work around this basic fact. Along with our new professional audio applications, we have developed SoundWIRE, a utility that affords an intuitive way of evaluating transaction delay and delay constancy. Its final form is an enhanced "ping" that uses actual sound reflection. A musical tone, such as a guitar pluck, can be created by repeatedly reflecting a digital acoustic signal between two hosts. Using the network delay between these reflections in place of a guitar string creates a tone whose stability represents perfectly regular service and whose pitch represents transmission latency. The ear's ability to discern minute differences makes this an unforgiving test of network reliability.
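
The pitch/latency relation described above is simple enough to sketch: a signal recirculating between two hosts behaves like a plucked string whose loop length is the round-trip time, so the perceived pitch is roughly the reciprocal of the RTT. The sketch below, with made-up RTT values, is only a back-of-the-envelope illustration of that relation, not the SoundWIRE code.

    /* Back-of-the-envelope sketch of the SoundWIRE pitch/latency relation. */
    #include <stdio.h>

    int main(void)
    {
        const double sample_rate = 48000.0;            /* pro-audio sampling rate */
        const double rtt_ms[] = { 2.0, 10.0, 50.0 };   /* hypothetical round trips */

        for (int i = 0; i < 3; i++) {
            double rtt = rtt_ms[i] / 1000.0;           /* seconds */
            double pitch_hz = 1.0 / rtt;               /* one loop traversal per period */
            double delay_samples = rtt * sample_rate;  /* equivalent "string length" */
            printf("RTT %5.1f ms -> pitch %7.1f Hz (delay line of %.0f samples)\n",
                   rtt_ms[i], pitch_hz, delay_samples);
        }
        /* Jitter modulates the loop length, so an unstable tone directly
         * sonifies irregular service; rising latency is heard as falling pitch. */
        return 0;
    }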

THE AGGREGATE
Booth: R1148
www.aggregate.org
hankd@engr.uky.edu

The Aggregate refers to a collection of researchers and the technologies that they use to make the components of a parallel computer work better together. Since before our first Linux PC work in the PAPERS project, we have been considering all aspects of Compilers, Hardware Architectures, and Operating Systems (KAOS) together, optimizing system performance rather than performance of the individual parts, focusing primarily on improving Linux PC clusters. This year, led by the University of Kentucky, The Aggregate will again present research from several universities. Demonstrations will focus on applications developed using the new technologies created by The Aggregate, especially a CFD (Computational Fluid Dynamics) code that is currently a Gordon Bell Price/Performance award finalist. New technologies being presented include 3DNow!-enhanced libraries, Flat Neighborhood Network (FNN) design tools, and a variety of enhancements on earlier research developments including PAPERS and our video wall library.

THE INSTITUTE OF PHYSICAL AND CHEMICAL RESEARCH (RIKEN)
Booth: R1092
www.riken.go.jp/engn/index.html
atsushi@atlas.riken.go.jp

We are developing the Molecular Dynamics Machine (MDM), a heterogeneous computer system for MD simulations. It accelerates the calculation of the Coulomb force using two special-purpose hardware systems, MDGRAPE-2 and WINE-2. The full system will be complete at the end of this year, and its peak performance will be about 75 teraflops. We will exhibit the building blocks of MDGRAPE-2 and WINE-2 and present live MD simulations on a 128-gigaflops subset of the MDM system.
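
For readers unfamiliar with what such boards accelerate, the heart of the workload is the O(N^2) pairwise Coulomb sum sketched below; on the MDM, the host ships positions and charges to the special-purpose hardware rather than running this loop itself. The units and softening constant here are illustrative only.

    /* Hedged sketch of the direct Coulomb sum that special-purpose MD
     * hardware accelerates (Coulomb constant folded into the units). */
    #include <math.h>
    #include <stdio.h>

    #define N 4

    /* Accumulate the Coulomb force on each particle from all others. */
    static void coulomb_forces(int n, double pos[][3], const double q[], double f[][3])
    {
        const double eps2 = 1e-12;                /* softening to avoid division by zero */
        for (int i = 0; i < n; i++) {
            f[i][0] = f[i][1] = f[i][2] = 0.0;
            for (int j = 0; j < n; j++) {
                if (j == i) continue;
                double dx = pos[i][0] - pos[j][0];
                double dy = pos[i][1] - pos[j][1];
                double dz = pos[i][2] - pos[j][2];
                double r2 = dx*dx + dy*dy + dz*dz + eps2;
                double s = q[i] * q[j] / (r2 * sqrt(r2));  /* q_i q_j / r^3 */
                f[i][0] += s * dx;                /* repulsive for like charges */
                f[i][1] += s * dy;
                f[i][2] += s * dz;
            }
        }
    }

    int main(void)
    {
        double pos[N][3] = {{0,0,0},{1,0,0},{0,1,0},{0,0,1}};
        double q[N] = {1,-1,1,-1};
        double f[N][3];
        coulomb_forces(N, pos, q, f);
        for (int i = 0; i < N; i++)
            printf("F[%d] = (%+.3f, %+.3f, %+.3f)\n", i, f[i][0], f[i][1], f[i][2]);
        return 0;
    }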

UNIVERSIDADE DE SAO PAULO
Booth: R715
www.lsi.usp.br/~spade
kofuji@lsi.usp.br

A prototype of a next-generation "global information infrastructure" will be demonstrated, based on high-performance, high-availability, high-security servers and on small, portable, wearable, wireless computers. Instead of serving "Grand Challenge"-class applications, these servers will store, process, and retrieve information for a large number of end-point "terminals." To demonstrate the concept, we will show a virtual collaborative secure work environment and several kinds of remote monitors.

UNIVERSITY OF GREENWICH
Booth: R1121
captools.gre.ac.uk
P.Leggett@gre.ac.uk

CAPTools (Computer Aided Parallelisation Tools) is an interactive toolkit for the semi-automatic parallelisation of serial Fortran code into message-passing SPMD-style parallel code. The parallel code produced by CAPTools is portable, efficient, and very similar to the original serial code, making it easy to understand and allowing further manual optimisation of the generated code if required. CAPTools has been developed by the Parallel Processing Research Group (PPRG) at the University of Greenwich, England, for over a decade. The group's focus has been on the development of manual parallelisation strategies for Computational Mechanics codes (structured and unstructured mesh) and the subsequent embedding of these strategies within CAPTools. CAPTools was exhibited at SC99 and has been installed and used at many sites in the U.S., including NASA, DoD, and various regional supercomputing centres.

UNIVERSITY OF HOUSTON
Booth: R1219
www.cs.uh.edu/~chapman
chapman@cs.uh.edu

Many research and development projects currently aim to facilitate the use of computational grids for the execution of supercomputing applications. Such environments promise an improvement in the utilization of existing computational resources, as well as faster time to completion for the individual user's job. However, an efficient utilization of grid resources is currently impeded, from both points of view, by the manual effort involved in resource selection and job submission. These tasks are supported at a high level for certain classes of applications only. Our goal is to permit the majority of grid users to specify the needs of their job in a convenient manner, and to support them in the task of detecting and selecting computational resources that are likely to meet these needs. In this exhibit, we display on-going work taking place in this area at the University of Houston.

UNIVERSITY OF ILLINOIS AT CHICAGO, NATIONAL CENTER FOR DATA MINING/NATIONAL SCALABLE CLUSTER PROJECT/LABORATORY FOR ADVANCED COMPUTING
Booth: R798
www.ncdm.uic.edu
grossman@uic.edu

Project DataSpace will link 14 sites across five continents to demonstrate a new infrastructure for: 1) the remote access, analysis, and mining of data and 2) the distributed analysis and mining of data. Researchers at the University of Illinois at Chicago will be joined by over 15 participating institutions and organizations to demonstrate a variety of DataSpace tools and applications. The exhibit will feature a high performance, wide-area, data-intensive testbed called the Terabyte Challenge, developed by the National Scalable Cluster Project (NSCP). The NSCP is a collaboration led by research groups at the University of Illinois at Chicago and the University of Pennsylvania and focused on wide-area cluster computing.

UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN, CENTER FOR SUPERCOMPUTING RESEARCH AND DEVELOPMENT
Booth: R891
www.csrd.uiuc.edu
scarroll@uiuc.edu

The CSRD booth will feature the PROMIS compiler, a multilingual, retargetable, optimizing, restructuring, and parallelizing compiler. Also featured will be the Scalable Java Server, built on nanoThread technology to provide better interrupt handling and scalability.

UNIVERSITY OF MANCHESTER, MANCHESTER COMPUTING
Booth: R1236
www.man.ac.uk
j.m.brooke@man.ac.uk

Manchester Computing is Europe's premier university computing facility, supporting world-class research and teaching in all disciplines. It is used by the UK academic community and, increasingly, by many overseas higher education institutions. It is also a major node in the EU-sponsored EUROGRID project and a member of the eGRID forum. The major components are CSAR, the flagship HPC service for UK academia and commerce, and MIMAS, providing on-line access to major datasets for the social and physical sciences. Manchester is a major network operations center on SuperJANET, the UK academic network, and is taking a leading role in establishing connectivity with partners in Internet2 projects. MC provides computing services to the University of Manchester through the Manchester Research Centre for Computational Science and the Manchester Visualization Centre. It is an international center for HPCN and visual supercomputing, with a recently installed virtual reality center specializing in virtual medicine. More than 25,000 users from over 150 UK institutions use Manchester Computing. Highlight: a demonstration of a global metacomputer linking the US, Europe, and Japan, performing real-time processing of the signal from the Jodrell Bank radio telescope in a search for new fast pulsars that can test the fundamental theories of physics.

UNIVERSITY OF TENNESSEE, INNOVATIVE COMPUTING LABORATORY
Booth: R202
icl.cs.utk.edu
swells@cs.utk.edu

The Innovative Computing Laboratory (ICL), part of the Computer Science Department at the University of Tennessee, carries out research in high performance computing focused on four main areas: distributed network computing, numerical linear algebra, software repositories, and performance optimization and benchmarking. Under the leadership of Jack Dongarra, this research has resulted in creative and original solutions to many significant problems inherent in high performance computing applications and architectures. Among the widely known ICL projects and technologies to be presented are NetSolve, Automatically Tuned Linear Algebra Software (ATLAS), Repository in a Box (RIB), the Top500 Report, and the Performance Application Programming Interface (PAPI). ICL's research receives government support from the NSF, DOD, DOE, and NASA. We also collaborate closely with companies such as Sun, IBM, SGI, and Cray to address the demanding high performance computing challenges confronting the research and engineering community.

UNIVERSITY OF TOKYO, GRADUATE SCHOOL OF FRONTIER SCIENCES, ADVENTURE PROJECT
Booth: R1115
adventure.q.t.u-tokyo.ac.jp/
yoshi@q.t.u-tokyo.ac.jp

ADVENTURE is a Computational Science project developing an advanced general-purpose computational mechanics system that can analyze a model with 10-100 million DOFs within 1 hour to 1 day on the world's fastest computers (30-100 TFLOPS in 2002). To deal efficiently with such ultra-large-scale models, massively parallel algorithms are embedded in the pre-, main-, and post-processing stages. Neural networks, GAs, and VR then provide a high degree of user-friendliness. A distributed, object-oriented design and programming method is employed so that new theories and algorithms can be easily implemented in various heterogeneous computer environments. The exhibit presents an overview of the project and key technologies such as the hierarchical DDM, together with some numerical examples from a full-scale 3D model of a nuclear pressure vessel with 100 million DOFs.

UNIVERSITY OF TOKYO, GRAPE
Booth: R713
grape.c.u-tokyo.ac.jp/grape
fukushig@provence.c.u-tokyo.ac.jp

GRAPE (GRAvity piPE) is a family of special-purpose computers for the astronomical N-body problem. These machines are specialized for evaluating the gravitational interactions between particles and work as backend processors attached to general-purpose computers: the GRAPE hardware handles the interaction calculation, and the front-end general-purpose computers handle everything else, such as time integration and I/O. Our projects were awarded the 1995, 1996, and 1999 Gordon Bell Prizes. We are now developing GRAPE-6, whose peak speed will be over 100 teraflops; a simulation using a small configuration of the GRAPE-6 system has been selected as a finalist for this year's Gordon Bell Prize.
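
The division of labor described above can be sketched as follows: the front-end runs the time integrator (a leapfrog scheme here) and calls out for the expensive gravity sum, which a GRAPE board would evaluate on real hardware. The grape_accel() routine below is a host-side stand-in, not the actual GRAPE interface.

    /* Hedged sketch of the GRAPE host/backend split (G = 1 units). */
    #include <math.h>
    #include <stdio.h>

    #define N 3

    /* On real hardware this pairwise gravity sum is the backend's job. */
    static void grape_accel(double x[][3], const double m[], double a[][3])
    {
        const double eps2 = 1e-6;                      /* Plummer softening */
        for (int i = 0; i < N; i++) {
            a[i][0] = a[i][1] = a[i][2] = 0.0;
            for (int j = 0; j < N; j++) {
                if (j == i) continue;
                double d[3] = { x[j][0]-x[i][0], x[j][1]-x[i][1], x[j][2]-x[i][2] };
                double r2 = d[0]*d[0] + d[1]*d[1] + d[2]*d[2] + eps2;
                double s = m[j] / (r2 * sqrt(r2));     /* m_j / r^3 */
                for (int k = 0; k < 3; k++) a[i][k] += s * d[k];
            }
        }
    }

    int main(void)
    {
        double x[N][3] = {{1,0,0},{-1,0,0},{0,0.5,0}};
        double v[N][3] = {{0,0.3,0},{0,-0.3,0},{0.2,0,0}};
        double m[N] = {1.0, 1.0, 0.01};
        double a[N][3], dt = 1e-3;

        grape_accel(x, m, a);
        for (int step = 0; step < 1000; step++) {      /* front-end: integration + I/O */
            for (int i = 0; i < N; i++)
                for (int k = 0; k < 3; k++) {
                    v[i][k] += 0.5 * dt * a[i][k];     /* half kick */
                    x[i][k] += dt * v[i][k];           /* drift */
                }
            grape_accel(x, m, a);                      /* "offloaded" force evaluation */
            for (int i = 0; i < N; i++)
                for (int k = 0; k < 3; k++)
                    v[i][k] += 0.5 * dt * a[i][k];     /* half kick */
        }
        printf("body 0 at (%.4f, %.4f, %.4f)\n", x[0][0], x[0][1], x[0][2]);
        return 0;
    }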

UNIVERSITY OF UTAH, CENTER FOR HIGH PERFORMANCE COMPUTING
Booth: R1114
www.chpc.utah.edu
julia@chpc.utah.edu

The Center for High Performance Computing provides large-scale computer resources to facilitate advances in the field of computational science at the University of Utah. The projects supported by CHPC come from a wide array of disciplines requiring large capacity computing resources, both for calculating the solutions of large-scale, 2-D and 3-D problems, and for graphical visualization of the results.

UNIVERSITY OF VIRGINIA, LEGION
Booth: R1098
www.legion.virginia.edu
humphrey@cs.virginia.edu

Legion is an object-based software project designed to be the meta-operating system for computational grids. Legion runs on top of host operating systems such as UNIX and Windows and provides the illusion of a single virtual machine built from a collection of heterogeneous underlying physical machines, ranging from workstations to supercomputers. Since the project's beginning in late 1993, the Legion Research Group's goal has been a highly usable, efficient, and scalable system founded on solid principles. We have been guided by our own work in object-oriented parallel processing, distributed computing, scheduling, and security, as well as by decades of research in distributed computing systems. Our system addresses key issues such as scalability, parallelism, programming ease, fault tolerance, security, and site autonomy. At SC2000, the newest features of Legion will be showcased on a Legion network of over 400 hosts, emphasizing ease of use, security, and performance.

UNIVERSITY OF WISCONSIN-MADISON & UNIVERSITY OF MARYLAND
Booth: R1091
www.cs.wisc.edu/paradyn
www.cs.umd.edu/projects/dyninstAPI
bart@cs.wisc.edu

Paradyn is a scalable tool particularly suited to measuring, efficiently and effectively, the performance of long-running programs on large-scale parallel/distributed machines, such as SMPs and (heterogeneous) clusters of workstations. Novel techniques allow instrumentation of a program while it is running, automatically controlling the instrumentation to collect only the information needed to find current problems. Dynamic Instrumentation directly instruments unmodified applications during their execution, greatly reducing the total amount of performance data collected; a machine-independent interface, known as dyninstAPI, is used as the basis of a wide range of research and commercial tools. Decision support provided by the Performance Consultant isolates bottlenecks to specific causes and parts of a program through an automated search and measurement process. Additional research includes extending dynamic instrumentation to work on commodity operating system kernels, on-the-fly code tuning, and managing and exploiting execution data from experiments and revisions throughout the evolutionary development of a parallel application.

UNIFIED PARALLEL C (UPC)
Booth: R1221
hpc.gmu.edu/~upc
tarek@gmu.edu

This research exhibit will describe the underlying concepts of UPC, an explicitly parallel extension of ANSI C designed to provide both good performance and ease of programming for high-end parallel computers. UPC provides a distributed shared-memory programming model and includes features that allow programmers to specify and exploit memory locality. Such constructs facilitate explicit control of data and work distribution among threads so that remote memory accesses are minimized. Thus, UPC maintains the C-language heritage of keeping programmers in control of, and close to, the hardware. Among the advanced features of UPC are shared and private pointers into the shared and private address spaces, shared and private data, efficient synchronization mechanisms including non-blocking barriers, and support for establishing different memory consistency models. In addition to its original open-source implementation, UPC has gained acceptance from several vendors, who are producing exploratory compilers. Additional information can be found at hpc.gmu.edu/~upc.
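
To make the model concrete, here is a minimal sketch in UPC itself showing a shared array, affinity-driven work distribution with upc_forall, and a barrier. It is a toy illustrating the constructs named above, not an authoritative reference; consult the UPC specification for the full language.

    /* Minimal UPC sketch: shared data, affinity-based work distribution, barrier. */
    #include <upc.h>
    #include <stdio.h>

    #define N 16

    shared int a[N];        /* cyclically distributed across all threads */

    int main(void)
    {
        int i;

        /* Each iteration runs on the thread with affinity to a[i] (the
         * affinity expression &a[i]), so all writes here are local. */
        upc_forall(i = 0; i < N; i++; &a[i])
            a[i] = i * i;

        upc_barrier;        /* synchronize before reading remote elements */

        if (MYTHREAD == 0)  /* thread 0 may now read shared (possibly remote) data */
            for (i = 0; i < N; i++)
                printf("a[%d] = %d\n", i, a[i]);
        return 0;
    }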

WASEDA UNIVERSITY
Booth: R991
www.waseda.ac.jp
muraoka@muraoka.info.waseda.ac.jp

This is the third year this project has been exhibited at SC. The project is supported by the Japanese government through the Japan Society for the Promotion of Science. Its objective is to develop a parallelizing, restructuring compiler, as well as related tools, for parallel and heterogeneous distributed computing environments, with equal emphasis on practical and theoretical aspects. As an outcome of the project, we have developed a software system called GCI for distributed computing. GCI will be used as part of a Japanese version of the Grid, which is planned to be interconnected with a global Grid.