Reversible Computing

A Requirement for Extreme Supercomputing

Michael P. Frank, FAMU-FSU College of Engineering
mpf@eng.fsu.edu

Abstract

Raw computer performance (in, say, logic gate operations per second) at a given level of power consumption is directly limited by the energy dissipated per bit operation. For traditional "irreversible" logic, which destructively overwrites old outputs, this energy is fundamentally limited by Landauer's principle to at least kT ln 2, where k is Boltzmann's constant and T is (for purposes of determining total energy cost, including cooling overheads) the temperature of the final heat sink (e.g., Earth's atmosphere). Furthermore, unless special adiabatic bit-erasure techniques are used, an even larger energy of kT ln N ≈ 100 kT (where 1/N is an acceptable bit-error probability) must be dissipated. Assuming that it takes ~100,000 logic-gate operations for a well-optimized processor to implement an average FLOP (including all architectural overheads), simple arithmetic shows that a 1 megawatt machine (at Earth's surface) could have a performance of no more than ~24 EFLOPS (E = 10^18), and a 1 ZFLOPS (Z = 10^21) machine would require at least 41 MW.
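As a sanity check of this arithmetic, the short calculation below (a minimal sketch; the ~300 K heat-sink temperature is an assumption, since no specific value is stated above) reproduces the quoted figures:

```python
# Back-of-the-envelope check of the power/performance figures quoted above.
# Assumed inputs: heat-sink temperature T = 300 K (roughly Earth's atmosphere),
# ~100 kT dissipated per irreversible bit operation, ~1e5 bit ops per FLOP.

k_B = 1.380649e-23        # Boltzmann's constant, J/K
T = 300.0                 # assumed final heat-sink temperature, K

E_bit = 100 * k_B * T     # energy dissipated per irreversible bit operation, J
E_flop = 1e5 * E_bit      # energy per FLOP, including architectural overheads, J

flops_per_MW = 1e6 / E_flop       # performance ceiling of a 1 MW machine
watts_per_ZFLOPS = 1e21 * E_flop  # minimum power for a 1 ZFLOPS machine

print(f"Energy per FLOP:  {E_flop:.2e} J")
print(f"1 MW ceiling:     {flops_per_MW:.1e} FLOPS")
print(f"1 ZFLOPS floor:   {watts_per_ZFLOPS / 1e6:.0f} MW")
```

With these assumptions the script gives about 2.4 × 10^19 FLOPS (~24 EFLOPS) for 1 MW and about 41 MW for 1 ZFLOPS, matching the numbers above.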

If we hope to compute with better energy-efficiency than this, then we must perform bit operations adiabatically, that is, without transforming much of the typical kT ln N bit energy into heat. Highly adiabatic (or thermodynamically reversible) operations must also be logically reversible, that is, they must perform one-to-one transformations of the local digital state. It has been known since at least 1963 that reversible operations are still computationally universal. Although various technologists have occasionally expressed vague doubts about whether a practical computer based on reversible logic principles could ever be built, ongoing research over the last twenty years has shown that these doubts are ill-founded.
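As a concrete illustration of these two points (not drawn from the original abstract), the following sketch checks that the Toffoli gate, the textbook example of a universal reversible gate, is a one-to-one transformation of a 3-bit state and can emulate NAND when its target bit is preset to 1:

```python
# Illustration: the Toffoli gate (controlled-controlled-NOT) is logically
# reversible, yet suffices for universal Boolean computation via NAND.

from itertools import product

def toffoli(a, b, c):
    """Flip c if and only if both a and b are 1; a and b pass through unchanged."""
    return a, b, c ^ (a & b)

# Logical reversibility: all 8 possible 3-bit inputs map to distinct outputs.
outputs = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
assert len(outputs) == 8

# Universality: with the target bit preset to 1, the third output is NAND(a, b).
for a, b in product((0, 1), repeat=2):
    assert toffoli(a, b, 1)[2] == 1 - (a & b)

print("Toffoli is one-to-one on 3-bit states and computes NAND reversibly.")
```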

For example, in the late 1990s at MIT, in the DARPA-funded Pendulum project, we successfully designed, built, and tested, as a proof of concept, a variety of CMOS VLSI chips (including a complete RISC-style microprocessor) consisting entirely of adiabatic logic circuits. Many physical simulations, validated by empirical studies, have confirmed the long-term potential of reversible computing to circumvent the kT limit, and comprehensive analytical studies show that it can be cost-effective despite its overheads. Active research towards practical reversible computing continues today at the University of Florida and at the FAMU-FSU College of Engineering, under a grant from the Semiconductor Research Corporation (a consortium of all the major chipmakers).

A number of significant research challenges do still remain, in the areas of high-quality resonators, low-resistance switching devices, power/clock signal distribution, and reversible logic synthesis and optimization. However, the problems we are tackling in reversible computing today are arguably much less severe, and more straightforward to solve, than the wide range of very fundamental physical barriers that presently threaten to halt the progress of traditional semiconductor technology, perhaps within only the next few years. Proceeding very far beyond the limits of MOSFET technology requires that we implement reversible computing, in one form or another, as a prerequisite. But solving the remaining research problems will take time, so a more intense level of effort directed toward them needs to begin soon.

Thus, the high-performance computing community would be well advised to turn increasing attention to reversible computing over the next few years, since it is the only avenue consistent with fundamental physics that could extend the historical power-performance trends far enough to achieve our ambitions for extreme (zettaflops-scale) supercomputing. In short, reversible computing is possible, it is necessary, and the time to start on it is now.