TY - CHAP
T1 - PVM Grids to Self-Assembling Virtual Machines
AU - Geist, Al
PY - 2004
Y1 - 2004
N2 - Oak Ridge National Laboratory (ORNL) leads two of the five large Genomes-to-Life projects funded in the USA. As part of these projects, researchers at ORNL have been using PVM to build a computational biology grid that spans the USA. This talk will describe this effort, how the grid is built, and the unique features of PVM that led the researchers to choose it as their framework. Computations such as parallel BLAST run on individual supercomputers or clusters within this P2P grid and are themselves written in PVM to exploit PVM's fault-tolerance capabilities. We will then describe our recent progress in building an even more adaptable distributed virtual machine package called Harness. The Harness project includes research on a scalable, self-adapting core called H2O and on fault-tolerant MPI. The Harness software framework provides parallel software "plug-ins" that adapt the run-time system to changing application needs in real time. This past year we demonstrated Harness's ability to self-assemble into a virtual machine tailored to particular applications. Finally, we will describe DOE's plan to create a National Leadership Computing Facility, which will house a 100 TF Cray X2 system and a Cray Red Storm at ORNL, and an IBM Blue Gene system at Argonne National Laboratory. We will describe the scientific missions of this facility and the new concept of "computational end stations" being pioneered by the facility.
UR - http://www.scopus.com/inward/record.url?scp=35048835381&partnerID=8YFLogxK
U2 - 10.1007/978-3-540-30218-6_1
DO - 10.1007/978-3-540-30218-6_1
M3 - Chapter
AN - SCOPUS:35048835381
SN - 3540231633
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 1
EP - 4
BT - Recent Advances in Parallel Virtual Machine and Message Passing Interface
A2 - Kranzlmüller, Dieter
A2 - Kacsuk, Peter
A2 - Dongarra, Jack
PB - Springer-Verlag
ER -