Resource-intensive computational science, scientific collaboration and leading-edge technologies characterize the SC2000 Research Exhibits, which feature work from the world's research universities, labs and centers. New and innovative research efforts and results best suited to live interchange and demonstration will be exchanged and exhibited, promoting interactions among the presenters and attendees. High performance network links will be used for dynamically presenting research that involves distributed collaborating sites or for linking to remote resources or special facilities. These exhibits are adjacent to the industry exhibits and are staffed during all announced exhibition hours, beginning with the Monday evening Gala Opening event.
Attention Exhibitors: It is now time to order furniture and electricity for your SC2000 Research booth. Please consult the Ordering Instructions for details.
RESEARCH EXHIBITS CHAIR
VIRGINIA A. TO, HIGH PERFORMANCE TECHNOLOGIES, INC.
SC2000 RESEARCH EXHIBITORS
AIR FORCE RESEARCH LABORATORY
The CCCP comprises n Athlon-processor nodes connected via switched Fast Ethernet configured in a star topology. It utilizes readily available commercial components to assess the capabilities of Beowulf-class supercomputers that are not subject to export controls. Optimal performance will be attained through the use of optimizing compilers and hand-tuned libraries. The CCCP aims to provide a parallel Java design environment and development support for the Common HPC Software Support Initiative. It will be used to support the Joint Battlespace Infosphere development project at the Air Force Research Laboratory. The cluster will also provide a test bed for the evaluation and integration of new and emerging COTS (Commercial Off-The-Shelf) PCI-based multiprocessor add-on hardware, including multiple embedded processors and reconfigurable FPGA (Field Programmable Gate Array) architectures. The primary goal for the CCCP is the research, development, and application of advanced computing technologies relevant to the Air Force Information Technology mission. This mission includes high performance computing architectural design and simulation, advanced parallel hyperspectral imaging algorithm design and evaluation, parallel surveillance and tracking algorithm development and characterization, and advanced mixed technology (MEMS, MEFS, electro-optical, etc.) system-level design and simulation.
The Scalable Computing Laboratory in the DOE Ames Laboratory will showcase work on assessing and improving the communication performance of real-world parallel applications on large cluster computer systems. This work includes the option of trading latency for bandwidth by using data compression, which is particularly relevant to cluster computing, where compute cycles are cheap but intersite communication is limited. In addition, research into improving real-world application performance with lightweight message passing and OS-bypass techniques will be presented, together with parallel resource management tools.
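The latency-for-bandwidth tradeoff described above can be illustrated with a simple cost model. The numbers below are illustrative assumptions, not measurements from the Scalable Computing Laboratory's work:

```python
# Simple cost model for trading compute cycles (compression) against
# limited link bandwidth. All figures here are hypothetical.

def transfer_time(msg_bytes, latency_s, bandwidth_bps):
    """Time to send one message: startup latency plus serialization time."""
    return latency_s + msg_bytes / bandwidth_bps

def compressed_transfer_time(msg_bytes, latency_s, bandwidth_bps,
                             ratio, compress_s):
    """Pay extra latency to compress, then send fewer bytes on the wire."""
    return compress_s + transfer_time(msg_bytes / ratio, latency_s, bandwidth_bps)

# On a bandwidth-limited cluster link (e.g. Fast Ethernet, ~12.5 MB/s),
# spending cheap compute cycles on compression can win for large messages.
msg = 8 * 1024 * 1024                                  # 8 MB message
plain = transfer_time(msg, 50e-6, 100e6 / 8)
packed = compressed_transfer_time(msg, 50e-6, 100e6 / 8,
                                  ratio=2.0, compress_s=0.1)
print(plain, packed)  # halving the bytes on the wire beats the compression cost
```

For small messages the fixed compression cost dominates and sending uncompressed wins, which is why this is presented as an option rather than a default.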
The Arctic Region Supercomputing Center supports the computational needs of researchers within the Department of Defense High Performance Computing Modernization Program, the University of Alaska Fairbanks, other academic institutions and government agencies by providing high performance computing, visualization and networking resources, programming and technical expertise, and training. Areas of specialty supported by ARSC include ocean modeling, atmospheric sciences, climate/global change, space physics, satellite remote sensing, and civil, environmental and petroleum engineering. ARSC collaborates in a number of partnerships, including a joint effort with the U.S. Army Engineer Research and Development Center and SGI to build and evaluate a 512-processor SGI Origin 3800 single-system image. The Arctic Region Supercomputing Center operates a Cray T3E and a Cray SV-1, with visualization resources including a Pyramid Systems ImmersaDesk and a network of SGI workstations located in a video production/training lab and three additional access labs on campus.
Researchers at Argonne National Laboratory are developing powerful collaborative tools and technologies that will advance the state of the art in large-scale computing and will make scientists more productive. The exhibit showcases work in the following areas: numerical libraries for large-scale computational applications, parallel programming tools, collaborative tools, scalable superclusters, advanced visualization environments, software infrastructure for the national computational grid, servers enabling problem solving over the Internet, and associated scientific computing applications in such areas as computational chemistry and climate modeling. Closely tied with these projects is an emphasis on collaborations, including the ASCI program and the NCSA PACI Alliance.
Lawrence Berkeley National Laboratory (LBNL), home to the Department of Energy's National Energy Research Scientific Computing Center (NERSC) and Energy Sciences Network (ESnet), has long been a leader in computing and networking research. Today, the Lab's HPC and networking capabilities and facilities are helping transform DOE's research programs. Berkeley Lab will highlight its expertise and achievements in computational science: NERSC users will present results achieved using CRAY T3E and IBM SP supercomputers.
Boston University's research exhibit features its NSF-funded project, MARINER (Mid-level Alliance Resource In the NorthEast Region). MARINER is a partner in the National Computational Science Alliance and extends the university's efforts in advanced scientific computing and networking to organizations throughout the region. Demonstrations of current research and educational projects developed through the Center for Computational Science and the Scientific Computing and Visualization Group will be shown using graphics workstations, posters, and videos in the exhibit booth. We will also be demonstrating distributed computing, collaboration, and visualization software with our Alliance and other partners.
Brookhaven National Laboratory will exhibit new computational science developments in four research areas: 1) The Center for Data Intensive Computing (CDIC), initiated in 1998, is now fully established and pursuing research in high energy and nuclear physics, biological and environmental studies, hydrodynamics, material science and chemistry, and advanced scientific computing environments; 2) The Relativistic Heavy Ion Collider (RHIC), whose experiments require massive computational facilities for data collection and analysis, has been commissioned and has produced its first collisions; 3) The RIKEN QCD-SP machine, winner of a Gordon Bell Prize in 1998, will be succeeded by a follow-on 10 Teraflop supercomputer, whose architecture will be described; and 4) The Brookhaven Data Visualization group will demonstrate and highlight advances in the three areas above, along with other new applications in the physical and life sciences and in distance learning. A portable version of Brookhaven's unique stereographic visualization facility will be in place for demonstrations.
The Center for Advanced Computing Research (CACR) at the California Institute of Technology was established to foster advances in computational science and engineering. Therefore, CACR focuses on enabling breakthroughs in computational science and engineering by: a) following an applications-driven approach to computational science and engineering research, b) conducting multidisciplinary research on leading-edge computing facilities, c) providing a rich, creative intellectual environment which cultivates multidisciplinary collaborations, and d) harnessing new technologies to create innovative large-scale computing environments. To achieve these goals, CACR conducts multidisciplinary application-driven research in computational science and engineering through collaborations with Caltech, JPL faculty and staff, and R&D establishments throughout the world. Results from these collaborations will be featured in the research exhibit.
The Cornell Theory Center (CTC) will showcase a number of computational science and engineering applications that are running on our Velocity and Velocity+ high-performance clusters. Among the applications featured will be multi-scale materials modeling and LOOPP (Learning, Observing, and Outputting Protein Patterns), software developed and released through CTC's NCRR-funded parallel resource for biomedical scientists. CTC will demonstrate a domain monitoring tool developed at the center and the Cornell Multitask Toolbox for MATLAB. SciCentr, a unique informal education site in virtual world space, will also be shown. Acquired with the support of members of CTC's Advanced Cluster Computing Consortium (AC3), the Velocity clusters are built from industry-standard components and run the Windows 2000 operating system.
The High Performance Computing Modernization Office (HPCMO) will demonstrate its support to the DoD and the warfighter via an interactive poster slide show, hard copy posters, and some limited demonstrations. We will emphasize how the technology employed by the High Performance Computing Modernization Program "trickles down" to help the warfighter. We will show the Computational Technology Areas supported by the program, where these areas are supported and highlight some of the "Challenge Projects" for the past year. We will also describe the Defense Research and Engineering Network, as well as our software initiative known as CHSSI. Information about the Programming Environment and Training (PET) program will also be provided.
The U.S. Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) was created to develop and deploy the supercomputing environments (hardware and software) required to support three-dimensional, full-system simulations of U.S. nuclear weapon systems. In the absence of underground nuclear tests of those systems, numerical simulation will be the key to ensuring the safety and reliability of the U.S. nuclear deterrent. To meet this challenge, DOE scientists will need computers that achieve a speed of at least 100 Teraflop by 2004. The ASCI program has assembled powerful partnerships among DOE's three national security laboratories (Sandia, Los Alamos, and Lawrence Livermore National Laboratories), some of the world's leading computer manufacturers, and several of America's great universities. In this booth, the University of Minnesota's InTENsity PowerWall will be used to demonstrate the program's accomplishments by focusing on the overarching theme of the ASCI roadmap. This will include the application accomplishments and the ASCI technology development timelines and challenges.
East Carolina University's (ECU) exhibit showcases Embracing Visualization Methods, Progressive HPC Applications, and Innovative NGI Initiatives. In collaboration and partnership with the Ohio Supercomputer Center, state-of-the-art visualization applications developed for the leading-edge Reconfigurable Advanced Visualization Environment (RAVE) will be demonstrated on an ImmersaDesk. Presentations include a walk-through of a space-based low-Earth-orbit hospitality facility, a bioactive scorpion venom protein simulation, a 3D simulation prototype of Blackbeard the Pirate's flagship, research results in quantum chemistry, and Monte Carlo simulations in molecular biology utilizing massively parallel processing techniques. Other demonstrations include quality-of-service techniques for assessing high-bandwidth telemedicine applications that utilize real-time, high-fidelity pediatric heart sound transmission. As an NCSA affiliate, ECU will host the Education, Outreach, and Training Partnership for Advanced Computational Infrastructure (EOT-PACI) at our booth to provide information about national education projects and outreach initiatives.
EPCC is one of the leading HPC centers in Europe. It was established ten years ago as a focus for the University of Edinburgh's HPC work, and today has 45 full-time staff members. The center's mission is to promote the uptake of HPC in academia, industry, and commerce. EPCC is involved in providing service, in-depth support and training for UK scientific researchers, and works on technology transfer projects with industry. The center runs a successful visitor program (TRACS) for European researchers, and is involved in international HPC collaborations including the Java Grande Forum and the European Grid Forum. EPCC hosts several HPC platforms, ranging from a 344-processor Cray T3E to an 18-processor Sun Microsystems E6500 and an in-house built 16-processor Beowulf-class cluster.
Electrotechnical Laboratory (ETL), a Japanese research institute, performs research to produce creative science and technology. Tsukuba Advanced Computing Center (TACC), operated by the Agency of Industrial Science and Technology, supports the computational needs of HPC researchers. This exhibit demonstrates the following recent research topics in HPC: 1) global computing using Ninf, a network-enabled server system for global computing, and 2) high performance computing with TACC supercomputers, including the Hitachi SR8000, IBM RS6000/SP, and a 256-node Alpha cluster. TACC plays an important role as a leading computing center for computational grids in the world. The basic concept and status of a global computing infrastructure around the Asia-Pacific area, called Asia-Pacific Grid (ApGrid), will be introduced in this exhibit.
The booth will present the developments and results achieved by CEPBA in research and development projects during past years. The main project will be Paraver, a visualization and analysis tool for MPI, OpenMP, and Java programs. Other projects in the booth will be Nanos (cooperation between OpenMP compiler and OS scheduling on multiprogrammed multiprocessors) and Dimemas (a simulator of DIstributed MEmory MAchines that is being successfully used in tuning MPI applications). We intend to show that a careful design of different tools enables their integrated use, supporting methodologies and practices that lead to very high productivity of parallelization activity. The people interested in these topics who come to the booth will be able to see demonstrations of the different projects and will obtain explanations about CEPBA developments and activities.
The world's physicists probe the laws of nature at Fermilab and SLAC, the nation's principal experimental high-energy physics facilities. Graphical displays representative of the billions of collisions measured by current experiments will introduce strategies for petabyte-scale data storage and access in this joint exhibit. SLAC will demonstrate its 150-terabyte-and-growing Objectivity/DB database used by hundreds of physicists. Fermilab will demonstrate systems that will manage and analyze even larger volumes. Progress on the Particle Physics Data Grid's High-Speed File Replication Service will be shown. A demonstration of PingER will show worldwide network monitoring and six years of collected data. Physics analysis techniques and results will be described using the BaBar and CMS experiments as examples. The Sloan Digital Sky Survey will illustrate growing collaboration with other practitioners of data-dominated science. Fermilab (operated by URA Inc.) and SLAC (operated by Stanford University) are funded by the US Department of Energy.
The High Performance Computing Center Stuttgart (HLRS) is a national HPC center in Germany for research. In addition, together with debis Systemhaus GmbH and Porsche, it has formed a joint company to provide access to supercomputers for research and industry. At SC2000 HLRS will demonstrate its activities in the field of Grid Computing or Metacomputing. Our presentation will show the main building blocks HLRS is working with. Several projects highlight how these blocks are put together. This includes computers, networks, middleware and applications. To show the potential of the resulting framework HLRS will integrate computing resources at Tsukuba Advanced Computing Center (TACC)/Japan, Japan Atomic Energy Research Institute (JAERI)/Japan, Sandia National Labs (SNL)/USA, Pittsburgh Supercomputing Center (PSC)/USA, Manchester Computing Center (MCC)/UK, the National Center for High Performance Computing (NCHC)/Taiwan, and HLRS/Germany. Several applications in different fields will use this framework with results being visualized by our own collaborative visualization tool COVISE.
Research@Indiana is a collaboration among Indiana's three largest research universities (Indiana University, Purdue University, and the University of Notre Dame) to showcase research and development accomplishments in high performance computing, high performance networking, and visualization. The display will showcase computer science developments, including cluster technology, collaboration, grid technology, and massive data storage systems, as well as applications in areas such as astronomy, bioinformatics, chemistry, engineering, medicine, and physics. Indiana has become increasingly important as a center of information technology research, development, and commerce. Home to the Abilene and TransPAC NOCs, Indiana's universities are consistently represented in the Top500 list, and Indiana's computer scientists are developing important new software technology. Much as the research activities of Indiana's three R1 universities cover a great diversity of disciplines, so do the accomplishments of Indiana-based researchers making use of HPCC applications.
Internet2 is a consortium of over 170 universities working in partnership with industry and government to develop and deploy advanced network applications and technologies, accelerating the creation of tomorrow's Internet. Internet2 is recreating the partnership among academia, industry, and government that fostered today's Internet in its infancy. The primary goals of Internet2 are to: create a leading-edge network capability for the national research community; enable revolutionary Internet applications; and ensure rapid transfer of new network services and applications to the broader Internet community.
CCSE of the Japan Atomic Energy Research Institute was established in April 1995 under governmental guidance to promote computational science and engineering among the national and other semi-governmental research organizations. CCSE has the largest, best-equipped complex parallel computer system, consisting of nine sets of parallel computers of different machine architectures, five of which are vector machines and four scalar. The computers are located at four sites in Japan. Exploiting this distinctive complex of computer systems, CCSE has put special emphasis on R&D in parallel computing technologies and has developed a common technological basis for parallel processing, including: a) a distributed parallel computing environment, b) parallel computation algorithms, c) parallel processing tools, and, as a specific application of these technologies, d) techniques for numerical experiments. We report on our activities regarding these issues.
Japan Marine Science and Technology Center (JAMSTEC), a general oceanographic research institution, was established in October 1971. JAMSTEC installed a NEC SX-4 in spring 1996 and an SX-5 in November 1999 for studying global change. These supercomputer systems are indispensable to understand and predict phenomena such as El Niño events, global warming, weather disasters, and tectonic structure around plate boundaries. In 1997, the Earth Simulator project was started as the cooperative project among JAMSTEC, JAERI, and NASDA under the direction of the Science and Technology Agency (STA) of Japan. The Earth Simulator is a distributed memory parallel supercomputer composed of 640 processor nodes, and each node consists of eight vector processors. The total peak performance and main memory capacity are 40 teraflops and 10 terabytes, respectively. The Earth Simulator will be in operation in the first quarter of 2002 in Yokohama. The Earth Simulator is expected to implement a coupled atmosphere-ocean general circulation model with high resolution, which is being developed by the Frontier Research System for Global Change (FRSGC).
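The Earth Simulator figures quoted above (640 nodes of eight vector processors each, 40-teraflop aggregate peak) imply per-node and per-processor peaks that can be checked with simple arithmetic:

```python
# Derive per-node and per-processor peak performance from the Earth
# Simulator figures quoted in the exhibit description.
nodes = 640
procs_per_node = 8
total_peak_flops = 40e12                        # 40 teraflops aggregate

total_procs = nodes * procs_per_node            # 5,120 vector processors
peak_per_node = total_peak_flops / nodes        # 62.5 Gflops per node
peak_per_proc = peak_per_node / procs_per_node  # ~8 Gflops per processor

print(total_procs, peak_per_node, peak_per_proc)
```

The 10-terabyte main memory divides the same way: 16 GB per node under these figures.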
Japan Science and Technology Corporation (JST), a semi-governmental organization, reports to the Science and Technology Agency. Since 1996, it has been operating the JST Super Computer Complex (SCC). The system is being used for two projects: "Electronic Structures Data System" in the material science field and "Gene Finding Tools" in the life science field. In 1998 JST started "Research and Development for Applying Advanced Computational Science and Technology (ACT-JST)," a grant program that promotes research and development using computational science and technology and networks. There are 25 projects ongoing under the program, and some of them are running tools on the SCC. The exhibit shows some of the systems being developed under the ACT-JST program and on the SCC.
The John von Neumann Institute for Computing (NIC), whose work is mainly carried out by the Central Institute for Applied Mathematics (ZAM), is one of three national HPC centers in Germany. Its task is to support and further develop scientific computing in Germany in cooperation with other centers, universities, and research institutes by providing supercomputer resources nationwide, developing computational methods, and conducting interdisciplinary research. Using a software system developed within the government-funded UNICORE Plus project, we will showcase the capabilities of uniform access to different supercomputers in Germany. R&D work on SMP-cluster systems, such as porting applications, performance analysis, programming models and tools for hierarchical systems, and especially the interaction of shared-memory and distributed-memory parallelism, will also be presented. We will also demonstrate recent activities in the performance analysis of parallel programs, the design of parallel algorithms, and the simulation and visualization of complex applications.
The Krell Institute booth will showcase research conducted in two fellowships administered by Krell, and will demonstrate computer-based computational science training for K-12 teachers using materials from the Adventures in Supercomputing pilot program. The fellowships on which the Krell booth will focus are first, the Department of Energy's Computational Science Graduate Fellowship (CSGF) and second, the computer science fellowship sponsored by Los Alamos National Laboratory and Lawrence Livermore National Laboratory. The CSGF fellows carry out research in a wide variety of resource intensive computational science areas including turbulent combustion, aeroelastic system simulation, and adaptive algorithms used for sensory perception, planning, and control. Computer science fellows concentrate their research in the high performance computing areas of scalable operating/run-time systems, hierarchical program systems, compiler design, networking research, performance modeling, and component architectures.
Scientific and technological research and development remain core to Los Alamos. Ongoing research ranges from innovative biological research to predictive modeling of the global climate, wildfire, and transportation systems, and from developing the scientific visualization tools of tomorrow to implementing one of the most powerful computers installed in the world. Los Alamos software teams are responding to the challenges of taming software complexity by building an integrated software infrastructure for scientific simulation development. Demos, posters, and an interactive video presentation will be shown that demonstrate these capabilities.
The Maui High Performance Computing Center (MHPCC) and the Albuquerque High Performance Computing Center (AHPCC) are national supercomputing centers managed by the University of New Mexico (UNM). Established under a Cooperative Agreement with the Air Force Research Laboratory (AFRL), MHPCC is a leader in scalable computing technologies and uniquely chartered to support the Department of Defense (DoD), government, commercial, and academic communities. AHPCC provides an environment for research and education in advanced high performance computing, interdisciplinary applications, and state-of-the-art communications. MHPCC is a Distributed Center of the DoD High Performance Computing Modernization Program (HPCMP), and both MHPCC and AHPCC are SuperNodes of the National Computational Science Alliance (Alliance). Projects featured at SC2000 include: advanced image enhancement research, new material design, development of Linux clusters and collaborative environments for performance analysis of scientific and engineering applications, and industrial design projects with private industry.
NASA's research exhibit demonstrates how NASA meets its goals using high-performance computing and networking, with projects from five field installations. A variety of real-time and interactive demonstrations feature the latest research in: computational applications serving NASA's aerospace, earth science, and space science missions; remote collaboration and use of virtual reality; software tools for developing, debugging, converting, monitoring, and optimizing code in grid environments; learning technologies; and high-end networking. A large collection of workstations, interactive theaters, and virtual reality devices are used to display the research and encourage visitor interaction.
National Aerospace Laboratory (NAL) is a government research institute under the Science and Technology Agency (STA) of Japan. NAL has been a leader in HPC among the STA organizations that make up the STA-JAPAN HPC Group. At SC2000, NAL joins efforts with its sister organizations to present the STA HPC Group's activities. NAL developed the Numerical Wind Tunnel (NWT) and has been operating it since 1993. NWT is the main computing engine of the NAL Numerical Simulator (NS) system. Its contribution to Japanese aerospace research and development projects, such as NEXST (a supersonic civil transport project) and HOPE (an unmanned space shuttle serving between the Space Station and Japan), is significant. NAL also promotes fundamental research in fluid dynamics and computational sciences. The exhibit will present the CFD Research Center and the Multidisciplinary Simulation Concept for the next-generation NWT. The present status of WANS (web access to the NS system) and UPACS (a unified CFD software package) will also be demonstrated over a connection between NWT and the booth.
NCAR's Scientific Computing Division presents The Virtual Earth System, a science and technology exhibit focused upon our large-scale simulation efforts in a variety of societally important research domains. This presentation uses a combination of virtual reality, digital media, and high-bandwidth networking to explore and display results from a broad spectrum of research efforts including: climate and global change, high-resolution severe weather, wildfires, atmospheric chemistry, terrestrial and solar turbulence, and more. The challenges and demands of these simulation efforts and the subsequent visualization of results provide insight into the importance and difficulty of developing a better understanding of our planet's processes. Progress in the Earth System Grid project, a joint effort with DOE in the area of wide-area large-scale data analysis, will also be demonstrated.
The National Center for High-performance Computing (NCHC) is one of the national laboratories under the National Science Council (NSC) in Taiwan. It is the only research center for high-performance computing applications in Taiwan. Recently, the center was also made the center for the next generation research network of Taiwan. NCHC has conducted various research applications regarding high-performance computing and networking. In the research exhibition, we will use immersive and collaborative virtual reality to showcase the following programs: 1) Plastic extrusion modeling and mold design using numerical wind tunnel, 2) A virtual GIS-based 3D hydro-dynamic model of the Tamshui river, 3) The crashworthiness of the newly developed Yulong vehicle during a frontal impact, and 4) The structure-based drug design model of a transmembrane endothelin receptor and its antagonist. Moreover, we will participate in "Bandwidth-Intensive Network Applications at SC2000" collaboratively with five supercomputing centers from Germany, US, Japan, and the UK to showcase global meta-computing applications.
The National Computational Science Alliance (Alliance) is a partnership of more than 50 institutions working to prototype the next century's advanced computational and information infrastructure. The National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign anchors the Alliance, which is funded by the NSF's Partnerships for Advanced Computational Infrastructure (PACI) program. At SC2000, Alliance teams will demonstrate how the Alliance attacks large-scale problems of science and engineering and drives technology development. They will also show how scientists are using the emerging PACI Grid, a prototype national technology grid, to conduct long-distance collaborative research. The use of cluster technologies in scientific research, specifically clusters built on Intel's new 64-bit Itanium processor, will also be highlighted. Many applications have been ported to the cluster platform, and their performance often matches or exceeds that on more traditional supercomputing platforms.
The National Coordination Office (NCO) for Information Technology Research and Development (ITRD) exhibit will feature demonstrations and displays about Federal information technology R&D. This year's exhibit will highlight SuperNet applications -- SuperNet is DARPA's gigabit speed Next Generation Internet (NGI) testbed. These multi-agency applications will include:
Additional information will be available about multi-agency Federal IT R&D efforts, the President's Information Technology Advisory Committee, and the role of the NCO.
NPACI is an NSF-supported consortium of four dozen premier academic, industrial, and research institutions, led by SDSC at UC San Diego. Its mission is to advance science by creating a national computational infrastructure through capability computing (providing compute and information resources of exceptional capability to enable scientific discovery at scales not previously achievable); discovery environments (developing and deploying integrated, easy-to-use computational environments to foster scientific discovery in traditional and emerging disciplines); and computational literacy (extending the excitement, benefits, and opportunities of science to a diverse population). NPACI's exhibit will showcase "Alpha projects" in bioinformatics infrastructure, protein folding, telescience, multicomponent environmental modeling, scalable visualization, biological fluid dynamics, and cellular microphysiology. We will demonstrate new tools and applications being developed by the cooperating partners, present NPACI resources in action through "transparent supercomputing," and show how the partnership's activities, products, and services are meeting real needs of the computational science community.
Efficient execution of applications requires insight into how system features impact application performance. For distributed systems, gaining this insight is complicated by the complexity of those features; it generally results from significant experimental analysis and possibly the development of performance models. This research exhibit will present the Prophecy Project, an infrastructure that aids in gaining this needed insight based upon one's own experience and that of others. Prophecy consists of three major components: profiling and instrumentation, relational databases, and data analysis.
ORNL highlights scientific discoveries in climate, genomics, materials, molecular biology, and nuclear medicine made possible by advances in mathematical methods and high performance computing. Learn how performance evaluations of early systems are used to develop specialized techniques to optimize applications for terascale systems. See how M3C and C3 tools and Harness/PVM help build fault-tolerant clusters (NT, Linux, or NOW) that can be administered via a web browser. Check out plug-ins for going unplugged and the CCA (Common Component Architecture), the next best thing to cut and paste for developing large-scale multi-disciplinary simulations. Visit the ORNL booth to explore these projects and more.
Over the years, OSC has made a place for itself in the world of High Performance Networking and Computing. This year, we plan to showcase how the work we have done has made a difference. Our theme this year is "Expanding the HPC Frontier." One important focus of OSC is our development of a working prototype system for the virtual simulation of temporal bone dissection. This scientific visualization has the potential to strongly impact medical procedures and instruction. Another focus is our work with Internet2 technology. ITEC-Ohio, led by OARnet (a division of OSC) together with a consortium of Ohio universities and corporate partners, is one of two national testbeds for testing and evaluating emerging Internet2 technologies. Finally, OSC's collaborations with research institutions such as East Carolina University have opened doors to expanded technological applications. For example, OSC participated in an Internet-supported military exercise in which medical doctors from Columbus provided diagnostic and treatment advice for U.S. military personnel injured in battlefields halfway around the globe.
Despite recent advances in processors, analysis of brain data remains time-consuming because it is often performed on a single-processor workstation or personal computer. Our goal is to seamlessly acquire, analyze, and visualize brain data, which are essential to understanding brain function, over the Internet. To this end, we have been developing an MEG (magnetoencephalography) data analysis system using emerging grid technologies. MEG is a promising brain functional imaging modality. We will show that our system can dramatically reduce analysis time and has the potential to advance the current state of brain science. Specifically, MEG data analysis is performed on remote high-performance computers, and the results of the analysis are visualized immediately afterward.
Pacific Northwest combines computational, mathematical, and engineering sciences with high performance computing technology to solve key scientific problems in: applied mathematics (complex, multi-spatial-scale problems); atmospheric science (local, regional, and global models); computational biology (modeling micro- and macro-scale systems with large data sets); chemistry (management and remediation of legacy nuclear wastes); computer science (visualization, problem-solving environments, and artificial intelligence products); engineering (modeling effects of stress, heat, and impact on materials); and reactive chemical transport (coupling atomic-scale with continuum models). Pacific Northwest scientists are developing modeling and simulation applications in these domains. Together with high-performance tools and libraries, these applications run on massively parallel computer systems to efficiently manage and analyze very large data sets. Pacific Northwest will exhibit ongoing research in these domains.
The Pittsburgh Supercomputing Center (PSC) is an NSF national terascale supercomputing center also receiving funding from the Department of Energy, the National Institutes of Health, and the State of Pennsylvania. PSC is dedicated to providing government, academic and industrial users with access to state-of-the-art high performance computing and communication resources. Our educational mission features an internship program that provides participants with real experience in a high technology environment. Above all, we strive to provide a flexible environment conducive to solving today's largest and most challenging computational science problems. Our research exhibit will demonstrate the capabilities of our resources, which include a Cray T3E/LC512 and other HPC platforms. We will feature a variety of demonstrations designed to showcase PSC research. Particular areas of focus include materials science, high energy physics, weather modeling, and computational biomedical research such as bioinformatics, structural biology, computational pathology and functional MRI.
The booth will present and demonstrate computing portal and wide-area network-computing technologies that allow seamless management of high-performance applications, data, and machines distributed across wide-area networks. These technologies allow users to access and run applications (even unmodified commercial applications) from anywhere via standard Web browsers. The exhibit will highlight the manner in which state-of-the-art technologies can be used to build end-to-end solutions for high-performance, ubiquitous supercomputing on Intranets and the Internet. The demonstration will be based on PUNCH, a network computing system that has been operational for five years. PUNCH currently provides computing services to about 800 users across ten countries; 50 engineering software packages developed by 13 universities and six vendors are available. The exhibit will include presentations on several multi-university projects for collaborative engineering and distance education that utilize PUNCH as an enabling technology.
Real World Computing Partnership (RWCP), funded by the Japanese government, will show: 1) network architectures and 2) system software and applications for seamless parallel and distributed computing environments. Two network architectures will be presented: i) Comet, a clustering technology using IP over OC-48c POS (2.5 Gbps) for information grids, and ii) RHiNET, a local-area system network for high performance parallel computing. The following system software and applications will be shown: i) the SCore cluster system software, ii) the cluster-enabled Omni OpenMP compiler for SMPs and PC clusters, iii) the PROMISE programming environment for regular and irregular scientific applications, iv) the SPST programming tool for heterogeneous parallel and distributed systems, and v) a parallel data mining system. To demonstrate the system software and applications developed at RWCP, RWC PC Cluster III, consisting of 128 PCs connected by Myrinet, three Fast Ethernets, and Gigabit Ethernet, will be brought to SC2000. Visitors will see firsthand the impact of network bandwidth on applications.
Research Organization for Information Science & Technology (RIST) was established in 1995 under the umbrella of the STA (Science and Technology Agency). Since then RIST, located in central Tokyo, has worked under STA's guidance to advance the frontier of computational science and technology. One of its major missions is to support integrated computational environments focusing on earth science and related areas. The exhibit centers on GeoFEM (a parallel finite-element solid earth simulation code) and Foo-Jing (a framework for the next generation atmospheric model).
Computational studies using high performance computers and simulation codes are applied in the fields of engineering and materials science. Three-dimensional visualization is being developed for research and education. Educational systems built on these visualization environments serve undergraduate students as well as middle-school students in Saitama.
The booth will present benchmarking activities of the SPEC High-Performance Group. The exhibit pursues two goals. First, it will present SPEC's high-performance computing benchmarks, SPEChpc, and the new SPEComp suite. These benchmarks are a service to the HPC community, where they can be used for machine procurement, for improving existing computer systems, and for research on software and hardware components of HPC systems. Second, the booth will present several research projects that are closely related to the SPEC effort. These projects define performance evaluation methodologies, characterize computational applications and evaluate candidate benchmarks. We will present a number of such efforts from several participating organizations. One particular highlight of this year's exhibit will be the new SPEComp benchmark suite. SPEComp provides new benchmarks written in the emerging parallel programming standard OpenMP, which is now supported on all major HPC platforms.
This research project evaluates quality of service (QoS) using "Sound Waves on the Internet from Real-time Echoes." New, no-compromise, TCP-based computer applications for audio will be demonstrated, using a simplified approach to high quality music and sound streaming over IP networks. Previously existing systems for streaming digital audio involve a number of trade-offs. Because of transmission bandwidth limitations and best-effort delivery, audio signal compression of one form or another is typical, and buffering of data, which often delays a signal by seconds, safeguards against delivery uncertainties. Audio is an unforgiving test of networking: if one data packet arrives too late, we hear it. Compromises of signal quality and interactivity have been necessary to work around this basic fact. Along with our new professional audio applications we have developed SoundWIRE, a utility that affords an intuitive way of evaluating transaction delay and delay constancy. Its final form is an enhanced "ping" that uses actual sound reflection. A musical tone, such as a guitar pluck, can be created by repeatedly reflecting a digital acoustic signal between two hosts. Using the network delay between these reflections as a substitute for a guitar string creates a tone whose stability represents perfectly regular service and whose pitch represents transmission latency. The ear's ability to discern minute differences makes this an unforgiving test of network reliability.
The Aggregate refers to a collection of researchers and the technologies that they use to make the components of a parallel computer work better together. Since before our first Linux PC work in the PAPERS project, we have been considering all aspects of Compilers, Hardware Architectures, and Operating Systems (KAOS) together, optimizing system performance rather than performance of the individual parts, focusing primarily on improving Linux PC clusters. This year, led by the University of Kentucky, The Aggregate will again present research from several universities. Demonstrations will focus on applications developed using the new technologies created by The Aggregate, especially a CFD (Computational Fluid Dynamics) code that is currently a Gordon Bell Price/Performance award finalist. New technologies being presented include 3DNow!-enhanced libraries, Flat Neighborhood Network (FNN) design tools, and a variety of enhancements on earlier research developments including PAPERS and our video wall library.
We are developing the Molecular Dynamics Machine (MDM), a heterogeneous computer system for MD simulations. It accelerates the calculation of the Coulomb force using two special-purpose hardware systems: MDGRAPE-2 and WINE-2. The full system will be completed at the end of this year, with a peak performance of about 75 teraflops. We will exhibit the building blocks of MDGRAPE-2 and WINE-2 and present live MD simulations on a 128-gigaflops subset of the MDM system.
A prototype of a next generation "global information infrastructure" will be demonstrated, based on high-performance, high-availability, high-security servers together with small, portable, wearable, and wireless computers. Instead of "Grand Challenge" class applications, these servers store, process, and retrieve information for a large number of end-point "terminals." To illustrate the concept we will demonstrate a virtual collaborative secure work environment and several kinds of remote monitors.
Computer Aided Parallelisation Tools (CAPTools) is an interactive toolkit for the semi-automatic parallelization of serial Fortran code into message-passing SPMD-style parallel code. The parallel code produced by CAPTools is portable, efficient, and very similar to the original serial code, making it easy to understand and allowing further manual optimisation of the generated code if required. CAPTools has been developed by the Parallel Processing Research Group (PPRG) at the University of Greenwich, England, for over a decade. The group's focus has been the development of manual parallelization strategies for Computational Mechanics codes (structured and unstructured mesh) and the subsequent embedding of these strategies within CAPTools. CAPTools was exhibited at SC99 and has been installed and used at many sites in the U.S., including NASA, DoD, and various regional supercomputing centres.
Many research and development projects currently aim to facilitate the use of computational grids for the execution of supercomputing applications. Such environments promise an improvement in the utilization of existing computational resources, as well as faster time to completion for the individual user's job. However, an efficient utilization of grid resources is currently impeded, from both points of view, by the manual effort involved in resource selection and job submission. These tasks are supported at a high level for certain classes of applications only. Our goal is to permit the majority of grid users to specify the needs of their job in a convenient manner, and to support them in the task of detecting and selecting computational resources that are likely to meet these needs. In this exhibit, we display on-going work taking place in this area at the University of Houston.
Project Data Space will link 14 sites across five continents to demonstrate a new infrastructure to handle: 1) the remote access, analysis, and mining of data and 2) the distributed analysis and mining of data. Researchers at the University of Illinois at Chicago will be joined by over 15 participating institutions and organizations to demonstrate a variety of DataSpace tools and applications. The exhibit will feature a high performance, wide-area, data-intensive testbed called the Terabyte Challenge, developed by the National Scalable Cluster Project (NSCP). The NSCP is a collaboration led by research groups at the University of Illinois at Chicago and the University of Pennsylvania, focused on wide-area cluster computing.
The CSRD booth will feature the PROMIS compiler, a multilingual, retargetable, optimizing, restructuring, and parallelizing compiler. Also featured will be the Scalable Java Server, built on nanoThread technology to provide better interrupt handling and scalability.
Manchester Computing is Europe's premier university computing facility supporting world class research and teaching in all disciplines. It is used by the UK academic community and, increasingly, by many overseas higher education institutions. It is also a major node in the EU-sponsored EUROGRID project and a member of the eGRID forum. The major components are CSAR, the flagship HPC service for UK academia and commerce, and MIMAS, providing on-line access to major datasets for the social and physical sciences. Manchester is a major network operations center on SuperJanet, the UK academic network, and is taking a leading role in establishing connectivity with partners in Internet2 projects. MC provides computing services to the University of Manchester through the Manchester Research Centre for Computational Science and the Manchester Visualization Centre. It is an international center for HPCN and visual supercomputing, with a recently installed Virtual Reality center specializing in Virtual Medicine. More than 25,000 users from over 150 UK institutions use Manchester Computing. Highlight: a demonstration of a global metacomputer linking the US, Europe, and Japan, with real-time processing of the signal from the Jodrell Bank radio telescope in a search for new fast pulsars that can test fundamental theories of physics.
The Innovative Computing Laboratory (ICL), part of the Computer Science Department at the University of Tennessee, carries out research in high performance computing focused on four main areas: distributed network computing, numerical linear algebra, software repositories, and performance optimization and benchmarking. Under the leadership of Jack Dongarra, this research has resulted in creative and original solutions to many significant problems inherent in high performance computing applications and architectures. Among the widely known ICL projects and technologies to be presented are NetSolve, Automatically Tuned Linear Algebra Software (ATLAS), Repository in a Box (RIB), the Top500 Report, and the Performance Application Programming Interface (PAPI). This ICL research receives government support from the NSF, DoD, DOE, and NASA. We also collaborate closely with companies such as Sun, IBM, SGI, and Cray in order to address the demanding high performance computing challenges confronted by the research and engineering community.
The ADVENTURE project, part of the Computational Science program, is developing an advanced general-purpose computational mechanics system that can analyze a model with 10-100 million DOFs within one hour to one day on the world's fastest computers, expected to reach 30-100 TFLOPS by 2002. To deal efficiently with such ultra-large-scale models, massively parallel algorithms are embedded in the pre-, main-, and post-processes, while neural networks, GAs, and VR provide user-friendly interfaces. A distributed object-oriented design and programming method is employed so that new theories and algorithms can easily be implemented in various heterogeneous computer environments. The exhibit presents an overview of the project and key technologies such as the Hierarchical DDM, together with numerical examples including a full-scale 3D model of a nuclear pressure vessel with 100 million DOFs.
GRAPE (GRAvity piPE) is a family of special-purpose computers for astronomical N-body problems. These machines are specialized for evaluating gravitational interactions between particles and work as backend processors attached to general-purpose computers: the GRAPE hardware handles the interaction calculation, while the front-end general-purpose computer handles everything else, such as time integration and I/O. Our projects were awarded the 1995, 1996, and 1999 Gordon Bell Prizes. We are now developing GRAPE-6, whose peak speed will exceed 100 teraflops. A simulation using a small configuration of the GRAPE-6 system has been selected as a finalist for this year's Gordon Bell Prize.
The Center for High Performance Computing provides large-scale computer resources to facilitate advances in the field of computational science at the University of Utah. The projects supported by CHPC come from a wide array of disciplines requiring large capacity computing resources, both for calculating the solutions of large-scale, 2-D and 3-D problems, and for graphical visualization of the results.
Legion is an object-based software project designed to be the meta-operating system for computational grids. Legion runs on top of host operating systems such as UNIX and Windows and provides the illusion of a single virtual machine from a collection of heterogeneous underlying physical machines ranging from workstations to supercomputers. From the project's beginning in late 1993, the Legion Research Group's goal has been a highly usable, efficient, and scalable system founded on solid principles. We have been guided by our own work in object-oriented parallel processing, distributed computing, scheduling, and security, as well as by decades of research in distributed computing systems. Our system addresses key issues such as scalability, parallelism, programming ease, fault tolerance, security, and site autonomy. At SC2000, the newest features of Legion will be showcased in a Legion network of over 400 hosts, emphasizing ease of use, security, and performance.
Paradyn is a scalable tool particularly suited to measuring, efficiently and effectively, the performance of long-running programs on large-scale parallel/distributed machines, such as SMPs and (heterogeneous) clusters of workstations. Novel techniques allow instrumentation of a program while it is running, automatically controlling the instrumentation to collect only the information needed to find current problems. Dynamic Instrumentation directly instruments unmodified applications during their execution, greatly reducing the total amount of performance data collected. A machine-independent interface, known as dyninstAPI, serves as the basis of a wide range of research and commercial tools. Decision support provided by the Performance Consultant isolates bottlenecks to specific causes and parts of a program through an automated search and measurement process. Additional research includes extending dynamic instrumentation to work on commodity operating system kernels, on-the-fly code tuning, and managing and exploiting execution data from experiments and revisions throughout the evolutionary development of a parallel application.
This research exhibit will describe the underlying concepts of UPC, an explicitly parallel extension of ANSI C designed to provide both good performance and ease of programming for high-end parallel computers. UPC provides a distributed shared-memory programming model and includes features that allow programmers to specify and exploit memory locality. Such constructs facilitate explicit control of data and work distribution among threads so that remote memory accesses are minimized. Thus, UPC maintains the C language heritage of keeping programmers in control of, and close to, the hardware. Among the advanced features of UPC are shared and private pointers into the shared and private address spaces, shared and private data, efficient synchronization mechanisms including non-blocking barriers, and support for establishing different memory consistency models. In addition to its original open-source implementation, UPC has gained acceptance from several vendors who are producing exploratory compilers. Additional information can be found at hpc.gmu.edu/~upc.
This is the third year this project has been exhibited at SC. The project is supported by the Japanese government through the Japan Society for the Promotion of Science. Its objective is to develop a parallelizing, restructuring compiler, as well as related tools, for parallel and heterogeneous distributed computing environments. The project puts equal emphasis on the practical and theoretical sides. As an outcome of the project, we have developed a software system called GCI for distributed computing. GCI will be used as part of a Japanese version of the GRID, which is planned to be interconnected with a global GRID.