Ask Jonathan Koomey About 'Koomey's Law'
A few weeks back, we posted a story here that described Koomey's Law, which (in the spirit of Moore's Law) identifies a long-standing trend in computer technology. While Moore's prediction centers on the transistor density of microprocessors, Jonathan Koomey focuses instead on computing efficiency — in a nutshell, computing power per watt, rather than only per square nanometer. In particular, he asserts that the energy efficiency of computing doubles every 1.5 years. (He points out that calling this a "law" isn't his idea, or his doing — but it's sure a catchy turn of phrase.) Koomey has agreed to respond to your questions about his research and conclusions in the world of computing efficiency. Please observe the Slashdot interview guidelines: ask as many questions as you want, but please keep them to one per comment.
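For readers who want to play with the numbers, here is a minimal back-of-the-envelope sketch (my own, not from the story or from Koomey) of what a 1.5-year doubling period implies:

```python
# Koomey's observed trend: computing efficiency (computations per joule)
# doubles roughly every 1.5 years. This is an illustrative sketch only.

def efficiency_multiplier(years, doubling_period=1.5):
    """Factor by which computations-per-joule grows after `years`."""
    return 2 ** (years / doubling_period)

# After a decade, efficiency would improve by roughly two orders of magnitude:
print(round(efficiency_multiplier(10)))  # → 102
```

The same one-liner with a 2-year period reproduces the slower Moore's-law-style cadence for comparison.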
Your Take on Futurists? (Score:5, Interesting)
GodfatherofSoul's Law (Score:1)
Find an arbitrary pattern or trend, then name it after yourself.
Let's work this backwards ... (Score:3)
Infinity w/ reversible computing? (Score:2)
This one doesn't seem to have fundamental physical limits, so long as we eventually transition to reversible computing [wikipedia.org], in which the computer does not use up useful energy because every process it uses is fully reversible (i.e. the original state could be inferred).
All the limits on computation (except regarding storage) that you hear about (e.g. Landauer limit) are on irreversible computing, which is how current architecture works. It is the irreversibility of an operation that causes it to increase entropy.
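For context on the Landauer limit mentioned above, a quick sketch (my own illustration, not the poster's; the Boltzmann constant is the exact 2019 SI value):

```python
import math

# Landauer limit: the minimum energy dissipated when one bit of information
# is erased in an irreversible operation, E = k_B * T * ln(2).
# Reversible computing avoids this bound because no information is erased.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temp_kelvin):
    return K_B * temp_kelvin * math.log(2)

# At room temperature (~300 K) the bound is on the order of 3e-21 J per bit:
print(landauer_limit_joules(300))  # ≈ 2.87e-21 J
```

Real CPUs dissipate many orders of magnitude more than this per logic operation, which is why the limit is a distant ceiling rather than a present constraint.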
Re:Infinity w/ reversible computing? (Score:4, Insightful)
Sorry, the question there wasn't clear. How about "Could the whole process be bypassed by the near-infinite efficiency of reversible computers?"
Re: (Score:2)
Particles traveling FTL might make it necessary to redefine the laws of thermodynamics slightly.
Do you have anything other than a W.A.G., or am I missing something? The best argument I can come up with is that Maxwell's demon pretty much needs to operate supersonically, and using some kind of electrical design doesn't help: c is already a zillion in Mach number, so a couple ppm more won't change the result. Everything else in the laws of thermodynamics operates subsonically, way below c...
I do respect that an unknown cause could also have a theoretical effect on thermodynamics.
Re: (Score:1)
All the limits on computation (except regarding storage) that you hear about (e.g. Landauer limit) are on irreversible computing, which is how current architecture works. It is the irreversibility of an operation that causes it to increase entropy.
No. The loss of power as heat has nothing whatsoever to do with the reversibility of computing operations.
Power is lost (and thermal entropy increased) because of the electrical resistivity of the materials from which our CPUs are made.
If you discover a room-temperature superconductor, please let the rest of the world know, because the researchers haven't found it yet. Bonus points if you can fabricate a modern CPU using that superconducting material.
Re: (Score:2)
Power is lost (and thermal entropy increased) because of the electrical resistivity of the materials from which our CPUs are made.
Right, like I said: it produces entropy because you are using an irreversible process to do it -- in this case, a wire that heats up.
Yes, it's *difficult* right now to get the computation speeds we want using only reversible processes. I didn't intend to suggest otherwise.
Multicore or System on a Chip Speed bumps? (Score:2, Interesting)
Re: (Score:2)
I do not understand how multi-core subverts our craving for transistor density; you kind of need that to increase the core count. The problem comes from the lack of really good tools for parallel programming.
Re: (Score:2)
>>I do not understand how multi-core subverts our craving for transistor density
Look, I do understand what you just said, but think with the herd...
Back in the early 80s, a phrase used in corporate America was "no one got fired for buying IBM."
Sometime in the mid-to-late 80s the 80386 came out, and corporate America had those PCs before they were even being sold to the broader market.
Back then it was a MHz race.
People will no longer brag about speed; they will brag about core count, with the thinking that more cores means more power.
Re: (Score:2)
Well, I do feel that we are at that speed threshold. Mobile will drive down power, so we are already there. I am waiting for the next big jump. There is no reason why a Tegra II could not power the average desktop PC.
Re: (Score:2)
Really? I feel that the speed threshold is very near. It's not like we already use all the power the chip can deliver, but the consumer will want all the extra power it can get. I believe we are nearing the start of using parallel programming to take advantage of all the extra cores. Most likely it will be the audio/visual segment of the market that creates the software for it (game designers, I am willing to bet).
We have seen this already done by splitting off the graphics to its own processor.
How will programmers affect this? (Score:2)
While sarcastic, your question is an important one: as computing power has increased, so has the tendency of coders to just ride over badly coded underlayers rather than redesign them competently and efficiently. Why bother cutting out bloat that causes an 80% penalty on system efficiency when you can just use a more efficient chipset to get the same result?
So my question is whether Koomey has put any thought into similarly quantifying the opposing software-bloat factor, and what he sees the total balance looking like.
Make the curve longer. (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Koomey's law only relates to the amount of power required to operate an electronic device [technologyreview.com]. The very purpose of laws like Koomey's and Moore's is to describe advances in electronics. While the energy involved in the unwinding of a mechanical computer's mainspring could perhaps be analysed, I think you'll find it hard to get meaningful figures for the energy involved in the operation of an abacus or slide rule, which aren't even complete calculating devices and rely very heavily on the human operator.
Re: (Score:2)
They're not actually computing machines.
Well, not generally programmable computing machines.
Re: (Score:2)
Re: (Score:2)
OK, I concede they are not programmable. They certainly (in my opinion) should be considered computing machines. However, I left off of my "request list" both programmable analog computers and plug programmable punch card equipment. Today's engineers may laugh, but I was able to do some pretty amazing things with both of those types of hardware. You work with the tools you have.
I don't know if these fit his proposed curve or not. I would just like to see the result of thinking about that question.
Re: (Score:2)
Especially in the era of cloud computing. Why does it matter? You know, even if you are not aware of it because Amazon provides you with Infrastructure as a Service, you are running virtual machines, which, wait for it: are running on real hardware :)
And of course Amazon has real limitations on how many machines a single datacenter can feed, and on what that power consumption costs.
Note that power can easily account for 50% of a datacenter's costs; for every watt spent computing, roughly two watts are drawn in total.
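The "one extra watt per computing watt" overhead the comment describes corresponds to a Power Usage Effectiveness (PUE) of about 2. A tiny sketch, with illustrative numbers of my own choosing:

```python
# PUE (Power Usage Effectiveness): total facility power divided by the power
# that actually reaches the IT equipment. A PUE of 2.0 means that for every
# watt of compute, another watt goes to cooling, conversion losses, etc.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# A facility drawing 2 MW to deliver 1 MW of compute has PUE 2.0:
print(pue(2000, 1000))  # → 2.0
```

Efficiency gains per Koomey's law reduce the numerator and denominator together; lowering PUE itself is a separate facility-engineering problem.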
Re: (Score:2)
How can anyone take "cloud computing" seriously? It's really just a much less efficient version of the age-old distributed computing paradigm. All it does is enable people who cannot wrap their heads around complex clustering topics to write extremely wasteful applications, and give management a new buzzword du jour.
Re: (Score:2)
Let's say you get 2 hrs into your sim and you realize you made a mistake in coding, forgot something,
saw initial results and realized you could trim things up to make it run better or turn out better results...
Or if you were taking the resources you were about to use more seriously, you might not be so quick to start a run before properly testing your code.
Hey, I'm all for commodity distributed computing. Would be nice if you could sell back, but even still. However, the branding of a suite of decade-old technologies as "cloud computing" has been rather silly. That's all I'm saying.
Moral/Ethical (Score:2)
OK, J.K., here is a list of moral/ethical arguments about the path we're on, as described by your law. You saw the path clearly enough to define a time-based law. Are there any issues I'm not seeing on our current path?
1) Lower energy consumption at point of use
2) Higher energy consumption at manufacturing point
3) faster cpu = bigger programs = more bugs = lower quality of life
4) faster cpu = stronger DRM possibilities
5) Better processing * battery life = better medical devices
6) Better processing * battery l
battery capacity vs proc speed (Score:2)
Hey J.K., have you run into a law relating battery capacity (either per kg or per L) to processor speed over time? I bet there is some kind of interesting curve for mobile devices. Or maybe not, dunno; that's why I'm asking a guy with previous success at data analysis in a closely related field...
Here's one: Gates' Law (Score:1)
I have one:
Gates' Law: "The bloatedness of software keeps pace exactly with the increase in power of hardware, to ensure that no actual improvements occur in the end user experience."
Also called Wirth's law (Score:2)
Feynman Quote (Score:3)
Feynman indicated that an efficiency improvement of approximately 100 billion times was possible, and a 40,000-times improvement has happened so far.
If we take Feynman's number at face value, this means that if computing efficiency improvements continue at the current rate (doubling every 18 months,) we will reach the theoretical maximum in 2043.
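The 2043 figure can be checked in a few lines (my arithmetic, assuming the 40,000x figure is counted as of roughly 2011, when this thread was posted):

```python
import math

# Headroom check: with ~1e11 total possible improvement (Feynman's estimate)
# and ~4e4 achieved so far, how many doublings remain, and how long do they
# take at one doubling every 1.5 years?

remaining_factor = 1e11 / 4e4                 # ≈ 2.5e6 still to go
doublings_left = math.log2(remaining_factor)  # ≈ 21.25 doublings
years_left = doublings_left * 1.5             # ≈ 31.9 years

print(round(years_left, 1))  # → 31.9, i.e. roughly 2043 from 2011
```

So the 2043 estimate in the question is consistent with the stated numbers.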
Based on that, do you believe that we will see a dramatic reduction in efficiency improvements in the next 10-20 years as we approach the theoretical limit, or do you think Feynman was conservative in his estimate?
Thanks!