Petascale virtual machine: Computing on 100,000 processors

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In the 1990s the largest machines had a few thousand processors, and PVM and MPI were key tools for making these machines usable. Now, with the growing interest in Internet computing and the design of cellular architectures such as IBM’s Blue Gene computer, the scale of parallel computing has suddenly jumped to 100,000 processors or more. This talk will describe recent work at Oak Ridge National Laboratory on developing algorithms for petascale virtual machines and on a simulator, which runs on a Linux cluster, that has been used to test these algorithms on simulated 100,000-processor systems. This talk will also look at the Harness software environment and how it may be useful for increasing the scalability, fault tolerance, and adaptability of applications on large-scale systems.

Original language: English
Title of host publication: Recent Advances in Parallel Virtual Machine and Message Passing Interface - 9th European PVM/MPI Users' Group Meeting, Proceedings
Editors: Dieter Kranzlmüller, Jens Volkert, Peter Kacsuk, Jack Dongarra
Publisher: Springer Verlag
Pages: 6
Number of pages: 1
ISBN (Print): 3540442960, 9783540442967
DOIs
State: Published - 2002
Event: 9th European Parallel Virtual Machine and Message Passing Interface Users’ Group Meeting, PVM/MPI 2002 - Linz, Austria
Duration: Sep 29, 2002 – Oct 2, 2002

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 2474
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 9th European Parallel Virtual Machine and Message Passing Interface Users’ Group Meeting, PVM/MPI 2002
Country/Territory: Austria
City: Linz
Period: 09/29/02 – 10/2/02
