December 23, 2024

Shattering the Thermodynamic Limits: New Framework Reshapes Computing

A recent study enhances the thermodynamic theory of computation, introducing methods to estimate the minimum energy costs for random and unpredictable computational tasks, aiming to boost computing efficiency. Credit: SciTechDaily.com

New research offers a refined approach to calculating the energy costs of computational processes, potentially paving the way for more energy-efficient computing and advanced chip designs.

Every computing system, whether biological or synthetic—ranging from cells and brains to laptops—incurs a cost. This cost isn’t merely the monetary price, which is straightforward to determine, but rather an energy cost tied to the effort needed to run a program and the heat that is released during the process.

Researchers at SFI and elsewhere have spent decades developing a thermodynamic theory of computation, but previous work on the energy cost has focused on basic symbolic computations — like the erasure of a single bit — that aren’t readily transferable to less predictable, real-world computing scenarios.

In a paper recently published in the journal Physical Review X, a quartet of physicists and computer scientists expands the modern theory of the thermodynamics of computation. By combining approaches from statistical physics and computer science, the researchers introduce mathematical equations that reveal the minimum and maximum predicted energy costs of computational processes that depend on randomness, a powerful tool in modern computers.

Practical Implications of New Research

In particular, the framework offers insights into how to compute energy-cost bounds on computational processes with an unpredictable finish. For example: A coin-flipping simulator may be instructed to stop flipping once it achieves 10 heads. In biology, a cell may stop producing a protein once it elicits a certain reaction from another cell. The “stopping times” of these processes, or the time required to achieve the goal for the first time, can vary from trial to trial. The new framework offers a straightforward way to calculate the lower bounds on the energy cost of those situations.
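To make the idea of a stopping time concrete, here is a small, purely illustrative Python sketch (not code from the study): it flips a simulated fair coin until 10 heads have appeared and records how many flips that took. Repeating the experiment shows how much the stopping time fluctuates from trial to trial.

```python
import random

def flips_until_n_heads(n_heads=10, p_heads=0.5, seed=None):
    """Flip a simulated coin until `n_heads` heads have appeared and
    return the number of flips used -- the stopping time of this run."""
    rng = random.Random(seed)
    heads = flips = 0
    while heads < n_heads:
        flips += 1
        if rng.random() < p_heads:
            heads += 1
    return flips

# The stopping time is random: repeat the experiment and look at the spread.
samples = [flips_until_n_heads() for _ in range(10_000)]
print("mean flips:", sum(samples) / len(samples))   # about 20 for a fair coin
print("shortest / longest run:", min(samples), max(samples))
```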

The research was conducted by SFI Professor David Wolpert, Gonzalo Manzano (Institute for Cross-Disciplinary Physics and Complex Systems, Spain), Édgar Roldán (Institute for Theoretical Physics, Italy), and SFI graduate fellow Gülce Kardeş (CU Boulder). The study uncovers a way to lower-bound the energetic costs of arbitrary computational processes. For example, an algorithm that searches for a person’s first or last name in a database might stop running once it finds either, but we can’t tell from the outcome which one it found. “Many computational machines, when viewed as dynamical systems, have this property where if you jump from one state to another you really can’t go back to the original state in just one step,” says Kardeş.
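As a purely illustrative sketch of that point (again, not code from the paper), the hypothetical search below stops as soon as either the first or the last name matches. From the final "found" state alone you cannot tell which field matched, so several different histories collapse into the same end state and the last step cannot simply be run backward.

```python
def find_by_first_or_last(records, name):
    """Scan (first, last) name pairs and stop at the first record where
    either field matches `name`. The stopped state does not record which
    field matched, so distinct paths lead to the same final state."""
    for index, (first, last) in enumerate(records):
        if name in (first, last):
            return index   # stopping time: position of the first match
    return None            # no match anywhere

people = [("Ada", "Lovelace"), ("Grace", "Hopper"), ("Alan", "Turing")]
print(find_by_first_or_last(people, "Hopper"))  # 1 -- matched on last name
print(find_by_first_or_last(people, "Alan"))    # 2 -- matched on first name
```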

Theoretical Foundations and Historical Context

Wolpert began investigating ways to apply ideas from nonequilibrium statistical physics to the theory of computation about a decade ago. Computers, he says, are systems out of equilibrium, and stochastic thermodynamics gives physicists a way to study nonequilibrium systems. “If you put those two together, it seemed like all kinds of fireworks would come out, in an SFI kind of spirit,” he says.

In recent studies that laid the groundwork for this new paper, Wolpert and colleagues introduced the idea of a “mismatch cost,” or a measure of how much the cost of a computation exceeds Landauer’s bound. Proposed in 1961 by physicist Rolf Landauer, this limit defines the minimum amount of heat that must be dissipated to erase a bit of information in a computer. Knowing the mismatch cost, Wolpert says, could inform strategies for reducing the overall energy cost of a system.
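For a sense of scale, a standard back-of-the-envelope calculation (textbook physics, not a result of the new paper) puts Landauer's limit at roughly 3 × 10⁻²¹ joules per erased bit at room temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant in joules per kelvin
T = 300.0            # approximate room temperature in kelvin

# Landauer's bound: minimum heat dissipated to erase one bit of information
q_min = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {q_min:.2e} J per erased bit")  # ~2.9e-21 J
```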

Across the Atlantic, co-authors Manzano and Roldán have been developing a tool from the mathematics of finance — martingale theory — to address the thermodynamic behavior of small fluctuating systems at stopping times. Roldán et al.’s “Martingales for Physicists” helped pave the way to successful applications of this martingale approach in thermodynamics.

In their PRX paper, Wolpert, Kardeş, Roldán, and Manzano extend these tools from stochastic thermodynamics to calculate the mismatch cost of common computational problems.

Taken together, their research points to a new avenue for finding the lowest energy needed for computation in any system, no matter how it’s implemented. “It’s exposing a vast new set of issues,” Wolpert says.

It may also have a very practical application, pointing to new ways to make computing more energy efficient. The National Science Foundation estimates that computers use between 5% and 9% of globally generated power, and at current growth rates that could reach 20% by 2030. Previous work by SFI researchers suggests that modern computers are grossly inefficient; biological systems, by contrast, are about 100,000 times more energy-efficient than human-built computers. Wolpert says that one of the primary motivations for a general thermodynamic theory of computation is to find new ways to reduce the energy consumption of real-world machines.

For instance, a better understanding of how algorithms and devices use energy to do certain tasks could point to more efficient computer chip architectures. Right now, says Wolpert, there’s no clear way to make physical chips that can carry out computational tasks using less energy.

“These kinds of techniques might provide a flashlight through the darkness,” he says.

Reference: “Thermodynamics of Computations with Absolute Irreversibility, Unidirectional Transitions, and Stochastic Computation Times” by Gonzalo Manzano, Gülce Kardeş, Édgar Roldán and David H. Wolpert, 13 May 2024, Physical Review X.
DOI: 10.1103/PhysRevX.14.021026