Computing Heat
June 30, 2018 3:29 AM

Does a powerful computer performing a difficult calculation output twice the heat of a computer of half that power performing a calculation of half the difficulty? In other words, does heat output scale evenly with computational power and task difficulty?
posted by Quisp Lover to Computers & Internet (11 answers total) 2 users marked this as a favorite
 
It depends on the specific computer and the specific calculation. And any such scaling is, of course, only an approximation, since some heat comes from the usual housekeeping stuff that doesn't really change regardless of what the machine is doing.

Basically, most modern computers don't appear to operate that way at first glance. That said, if a chip has to flip transistors twice as many times to do a given operation, it will generate twice as much heat (leaving aside frequency/voltage scaling), simply because it takes a certain amount of energy to change the state of a transistor.
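
To put a very rough number on that, here's a back-of-the-envelope sketch in Python; the capacitance, voltage, and transition counts are made-up illustrative values, not measurements of any real chip:

    # Rough sketch: dynamic switching energy per transistor transition,
    # E ~ 0.5 * C * V^2, with illustrative (not measured) values.
    C = 1e-15   # effective switched capacitance per transistor, farads (assumed)
    V = 1.0     # supply voltage, volts (assumed)

    energy_per_flip = 0.5 * C * V**2      # joules per transition
    flips_easy = 1e12                     # transitions for the "easy" task (assumed)
    flips_hard = 2 * flips_easy           # twice the work -> twice the flips

    print(energy_per_flip * flips_easy)   # ~5e-4 J of heat
    print(energy_per_flip * flips_hard)   # ~1e-3 J of heat, i.e. double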
posted by wierdo at 3:56 AM on June 30, 2018 [2 favorites]


Response by poster: I may have phrased my question poorly. Here's the proposition I'm trying to support: A more powerful computer can complete a difficult task faster than a less powerful computer, but the heat output is the same.

Is that true? Or does increased power proportionally improve speed AND reduce heat output?
posted by Quisp Lover at 4:19 AM on June 30, 2018


Best answer: Theoretically you are correct. Thermodynamically speaking, there is a minimum energy required to flip a bit no matter what technology you're using.

In practice, a higher-clocked/more advanced CPU will use less energy (on average) than older hardware because it will complete the calculation more quickly and then go into a low-power state where it uses very little energy. Since energy and heat output are proportional for a given processor type, the same applies to heat.

Basically, this is one of those places where practical inefficiencies make the theory look a bit wrong in the real world, but if we could make a perfectly efficient processor, your intuition would be correct since a given number of bit flips would use the same energy and generate the same heat regardless of the rate at which the processing happens.
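
For the curious, here's a quick Python sketch of that theoretical floor, the Landauer limit of k*T*ln(2) joules per bit erased; the bit count is just an assumed number for illustration:

    import math

    k = 1.380649e-23      # Boltzmann constant, J/K
    T = 300.0             # room temperature, kelvin
    bits = 1e15           # bit operations in some task (assumed, for illustration)

    landauer_per_bit = k * T * math.log(2)   # ~2.9e-21 J per bit erased
    print(landauer_per_bit)                  # minimum energy to erase one bit
    print(landauer_per_bit * bits)           # ~2.9e-6 J total, regardless of how fast you go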
posted by wierdo at 4:37 AM on June 30, 2018 [1 favorite]


As a rule of thumb, the larger the fabrication technology and the higher the processor complexity, the more heat it will generate. So a 14 nm Skylake chip will kick the crap out of a 65 nm Conroe at the same heat output because it's both smaller and better.
posted by Bangaioh at 5:03 AM on June 30, 2018 [2 favorites]


It also partly depends on whether the computers you're talking about are one new and one old, or both new. Very roughly speaking, a current-technology server-class CPU and a current-technology energy-efficient CPU will use in the ballpark of the same amount of power to perform the same computation (and emit in the ballpark of the same amount of heat). But efficiency improves over time, so if the less powerful system was high-performance in its day but is now just old, it'll draw more energy and emit more heat than a modern low-power system of the same capability.

Plus, there are also the theoretical limits that wierdo mentioned.

So the real answer is "perhaps, perhaps not, it depends".
posted by russm at 5:03 AM on June 30, 2018 [1 favorite]


Thermodynamically... believe it or not, it's the erasure of memory (Landauer's principle) that generates heat, rather than the XORs, NOTs, ANDs, etc. themselves. So, theoretically, if you've got sufficient memory (or you can do the calculation reversibly), you don't have to generate heat at all.

Obviously, in practice, the answer is completely different. But that goes to show that the difference between two computer systems isn't based on a fundamental thermodynamic principle/scaling law as much as it's a function of the particulars of the system and the calculation.
posted by cgs06 at 5:39 AM on June 30, 2018 [1 favorite]


Er... XORs and ANDs are not actually reversible, but one can make reversible versions. NOTs are reversible as implemented.
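
If it helps to see why, here's a quick Python sketch; the gates are just the standard truth tables, and the CNOT line is one example of a reversible version that carries an extra output along:

    # Count how many input combinations map to each output.
    # A gate is reversible only if that mapping is one-to-one.
    from collections import Counter

    print(Counter(int(not a) for a in (0, 1)))             # NOT: {1: 1, 0: 1} -> reversible
    print(Counter(a & b for a in (0, 1) for b in (0, 1)))  # AND: {0: 3, 1: 1} -> not reversible
    print(Counter(a ^ b for a in (0, 1) for b in (0, 1)))  # XOR: {0: 2, 1: 2} -> not reversible
    print(Counter((a, a ^ b) for a in (0, 1) for b in (0, 1)))  # CNOT keeps an input: one-to-one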
posted by cgs06 at 6:20 AM on June 30, 2018 [1 favorite]


The number you're looking for is FLOPS per watt (FLOPS is floating-point operations per second - a pretty good approximation for maths workloads).

As others have said, as a general rule the process size (in nanometers) dictates this. The CPU architecture will have some effect - a complex Intel processor will do much more, but use much more power, than a really simple ARM core - but the main thing dictating FLOPS per watt is the process size. Smaller transistors use less power to switch.

This article kind of answers your question with data. The conclusion (the graph at the end of page 2) shows that the power/performance curve of a range of processor types is pretty linear.

Whether a complex, power hungry processor like an Intel Xeon or a whole load of little ARM cores gives more calculations for your watt depends mainly on the workload you're doing.

So to answer your question: if the big powerful computer and the little low-power computer are of the same generation - that is, the process size used is the same - then they will give pretty similar performance per watt, and so the total heat produced for the same task will be similar. If either one is from an older fabrication process, its performance per watt will be worse.
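
To make that concrete, here's a little Python sketch with invented numbers: if two machines achieve similar operations per joule, the faster one simply finishes sooner while dumping about the same total heat for the task.

    # Two machines from the same process generation, illustrative numbers only.
    task_ops = 1e15                 # operations needed for the task (assumed)

    big_flops, big_watts = 1e12, 100.0      # big server-class machine (assumed)
    small_flops, small_watts = 1e11, 10.0   # small low-power machine (assumed)

    for name, flops, watts in (("big", big_flops, big_watts),
                               ("small", small_flops, small_watts)):
        seconds = task_ops / flops
        joules = watts * seconds            # total heat dumped for the task
        print(name, seconds, joules)        # 10x different time, same ~1e5 J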
posted by leo_r at 6:33 AM on June 30, 2018 [2 favorites]


Leaving aside theoretical concerns: in practical terms, the only good answer is "it's complicated".

Modern CPUs have a non-trivial relationship between clock speed and power consumption; see this graph for an example. There's a static power cost that depends on the core voltage, and there's a switching cost that increases dramatically as the clock speed approaches the maximum that the transistors can handle. The upshot is that at low speeds, increasing the clock rate uses less power per unit of computation, and at high speeds, it uses more; somewhere in the middle there's a sweet spot that depends on the workload.
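
Here's a hedged toy model of that trade-off in Python; the constants are invented rather than taken from any real CPU, but they show how energy per unit of work can fall and then rise as the clock goes up once static power and voltage scaling are both in play:

    # Toy model: total power = static (leakage) power + switching power,
    # with voltage forced up as frequency climbs. All constants invented.
    P_STATIC = 1.0                               # watts of leakage (assumed)

    def energy_per_gigaop(freq_ghz):
        voltage = 0.6 + 0.2 * freq_ghz           # assumed voltage/frequency curve
        p_switch = 0.5 * voltage**2 * freq_ghz   # ~ C * V^2 * f, with C folded into the 0.5
        power = P_STATIC + p_switch              # watts
        return power / freq_ghz                  # joules per billion operations

    for f in (0.5, 1.0, 2.0, 3.0, 4.0):
        print(f, round(energy_per_gigaop(f), 2))  # falls, bottoms out near 2 GHz, rises again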

But there are plenty of other factors that affect both performance and power consumption between different CPUs; clock rate isn't the all-determining performance number that it used to be. Newer processor architectures support more advanced vector instructions, which can perform calculations more quickly and efficiently than serial code. High-performance server CPUs tend to have huge caches and speculative execution engines to eke out as much speed as possible, at a substantial power cost. Going in the other direction, you can find ARM chips that have a mixture of slow/efficient and fast/inefficient cores on the same die, with the expectation that the OS can try to make decisions about which cores to enable at any given time. And so on.
posted by teraflop at 6:40 AM on June 30, 2018 [3 favorites]


You may or may not want to read up on Reversible computing - Wikipedia. I think it's also a chapter in Feynman's Lectures on Computation or some such. This path of inquiry leads to questions like how much computation you can do if you build a Dyson sphere around the sun and use all the energy you can get over the gradient from the inside to the outside. It also leads into the problems with planet-sized intelligences. Tread lightly. :)
posted by zengargoyle at 1:55 PM on June 30, 2018


In practice there is no such correlation. Your high-end overclocked workstation is actually going to use a lot more power, because power scales really badly with frequency (clock speed in gigahertz). In fact, that's why clock speeds, after increasing continuously since the invention of the transistor, basically plateaued around 2-3 GHz. But a really old computer would use a lot more power as well, because its transistors are bigger. In general, the best from a power standpoint would be a newer laptop or cell phone (so it has small transistors) running at a slow speed. A featherweight laptop or flagship phone running at 1.5 GHz is about the most power-efficient option.
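
As a rough illustration in Python, with invented numbers and assuming voltage has to rise roughly in step with frequency near the top of the range:

    # Rule of thumb: switching power ~ f * V^2, and if V scales with f,
    # power grows roughly like f^3. Purely illustrative, not any real chip.
    base_freq, oc_freq = 1.5, 4.5          # GHz: efficient phone vs overclocked desktop (assumed)

    power_ratio = (oc_freq / base_freq) ** 3   # ~27x the power draw
    time_ratio = base_freq / oc_freq           # finishes in ~1/3 the time
    energy_ratio = power_ratio * time_ratio    # ~9x the energy (and heat) for the same task

    print(power_ratio, time_ratio, energy_ratio)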
posted by wnissen at 9:41 AM on July 2, 2018


This thread is closed to new comments.