Bit watt-hours?
October 10, 2009 8:46 PM   Subscribe

In terms of energy usage, how much would you estimate that it costs for a bit to travel over the internet?

Obviously you are going to have to make some pretty liberal assumptions, such as distance, nodes traveled, and equipment used. Please do not submit a response saying that it is impossible. Make your best guess using as little or as much data as possible!

I have always wondered how spam impacts energy usage. I also suspect that the cost may be fairly fixed (i.e., it doesn't matter how many bits are sent; if the equipment is on, it's on), but I don't know enough about internet infrastructure to make a good guess.
posted by mr.dan to Computers & Internet (9 answers total) 1 user marked this as a favorite
 
I don't know, but Radio Netherlands just covered the issue of computers and energy use on its environment program, Earthbeat. I didn't listen to the whole broadcast, but perhaps they have an answer? (Or, maybe that's what prompted your question?)
posted by embrangled at 8:57 PM on October 10, 2009


Slate took a crack at that. Seemed like the numbers had a really high bogosity factor to me, though.

I'm not really sure there's any way to know. (And is it compared to sitting in a room in the dark? Instead of driving to the library?)
posted by meta_eli at 9:04 PM on October 10, 2009


Well, like you said, it's a fixed cost because the equipment is on whether or not useful data is going through it. Routing equipment under load will probably draw more power, but that data will be hard to get.

If you assume a switch can put through a bandwidth of X Gbps running on Y watts, then the energy used for each bit is Y/(X * 1,000,000,000) joules. Then figure out the average number of hops on a trip and multiply by that for the whole path. An issue might be that small routers have a lower power efficiency than big ones.
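
A rough sketch of that arithmetic in Python (the wattage, bandwidth, and hop count below are placeholder guesses, not measured figures):

# Energy per bit for one hop, then for a whole path of similar hops.
# All numbers are illustrative assumptions.
watts_per_device = 40.0      # Y: power draw of one switch/router (assumed)
bandwidth_bps = 10e9         # X: 10 Gbps of throughput (assumed)
hops = 15                    # rough guess at hops end to end (assumed)

joules_per_bit_per_hop = watts_per_device / bandwidth_bps   # watts divided by bits per second
joules_per_bit_total = joules_per_bit_per_hop * hops

print(f"{joules_per_bit_per_hop:.1e} J/bit per hop")          # ~4e-09
print(f"{joules_per_bit_total:.1e} J/bit over {hops} hops")   # ~6e-08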
posted by demiurge at 9:19 PM on October 10, 2009


The costs would be pretty easy to figure out with the right information. You simply take the electricity usage of each router along the path during the time it takes said router to transfer, say, one terabit. Then you divide by one trillion (or 2^40 if you're using the 'binary' tera prefix, but networking equipment is usually measured in base 10. Anyway...)

Modern routing equipment actually does use less electricity when it's idle. You could do some research on the wattage of various networking equipment and its speed. I looked around a bit and I think there's a lot of variance. A household router that can do 1 Gbps speeds probably runs around 5-10 watts, though. So that would be roughly 5-10 nanojoules per bit at full capacity.
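
As a sanity check on that household-router figure (treating the 5-10 W and 1 Gbps above as assumed round numbers):

# Joules per bit for an assumed household router running flat out at 1 Gbps.
for watts in (5, 10):
    joules_per_bit = watts / 1e9                      # watts / (bits per second)
    print(f"{watts} W at 1 Gbps -> {joules_per_bit * 1e9:.0f} nJ per bit")
# 5 W  -> 5 nJ per bit
# 10 W -> 10 nJ per bit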
posted by delmoi at 9:26 PM on October 10, 2009


High-end core routers draw a great deal of power, in the many-kilowatt range. But it's essentially all fixed cost: they draw that power whether they're carrying traffic or not, for the most part. Thus your question isn't a matter of fuzzy assumptions. It's just not answerable. It would be like asking how much energy your living room lamp wastes when your eyes are open. It doesn't matter if your eyes are open or closed.

The hidden question you asked is about spam, and the greatest waste from spam is the labor cost spent fighting it on the IT and legal fronts.
posted by chairface at 11:31 PM on October 10, 2009


Mikey-San, the ones are fine, it's the zeros that'll getcha.

The storage cost of spam on the receiving mail server is probably a lot more than the electricity cost.
posted by aubilenon at 11:59 PM on October 10, 2009


Thus your question isn't a matter of fuzzy assumptions. It's just not answerable. It would be like asking how much energy your living room lamp wastes when your eyes are open. It doesn't matter if your eyes are open or closed.

No, you just consider the time that your eyes are closed to be waste. So if the light is on 24/7, and you spend 6 hours a day in that room with your eyes open, then you just multiply by 4 to get the "amount of energy used per second of visual input".

For routers, you just calculate the total number of bits that pass through over a certain amount of time along with the amount of energy used during that period. Even if the hardware could handle 100 Gbps, if it only transfers 100 gigabytes in a day, you use that figure.
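
A sketch of that accounting, with invented numbers for the day's average draw and actual traffic:

# Energy per bit based on what the router actually moved, not its rated capacity.
# Both inputs are made-up figures for illustration.
seconds_per_day = 24 * 60 * 60
average_watts = 8.0                        # assumed average draw over the day
joules_used = average_watts * seconds_per_day

bytes_transferred = 100e9                  # 100 GB actually moved that day (assumed)
bits_transferred = bytes_transferred * 8

print(f"{joules_used / bits_transferred:.1e} J per delivered bit")   # ~8.6e-07
# Far worse than the full-capacity figure, because the hardware idles most of
# the day but keeps drawing power.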
posted by delmoi at 2:00 AM on October 11, 2009


Best answer: I think "over the internet" is the ambiguous part, as the OP points out ("liberal assumptions such as distance, nodes traveled, equipment used"). But if you change it to bit-meters per joule, I think the question has a pretty natural interpretation.

I.e., how many bit-meters were transmitted yesterday worldwide, and how much energy was used by the various devices that enabled that?
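
One way to make that concrete, with a toy set of links (traffic volumes, lengths, and energy figures are all invented):

# Bit-meters per joule: sum bits * distance over each link, divide by the total
# energy consumed by the equipment driving those links. Placeholder numbers.
links = [
    # (bits carried, link length in meters, joules consumed)
    (1e15, 50_000, 2e6),
    (5e14, 2_000_000, 9e6),
    (2e15, 10_000, 1e6),
]

bit_meters = sum(bits * meters for bits, meters, _ in links)
joules = sum(j for _, _, j in links)

print(f"{bit_meters / joules:.1e} bit-meters per joule")   # ~8.9e+13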
posted by hAndrew at 3:16 AM on October 11, 2009


It's just not answerable.

My car has a cost per mile, even though it isn't being used 24 hours a day. If I drive it 10K miles a year, and my average annual expense for gas, tires, maintenance, insurance, depreciation, parking, etc., is $4,000, then it makes sense to assign $0.40 as the cost per mile. There are other ways to view the cost, but this is a perfectly legitimate one. The question is quite answerable, given the data. (Whether that data is available is another question.)

A router's cost may be fixed in the sense that it is powered up whether it is in use or idle, but it incurs an annual cost for power, replacement, housing, etc., and it routes some number of bits in a year. Given that data, its cost per bit is that ratio. A commercial Cisco backbone router and a personal Linksys 4-port router might have very different costs, but they are knowable, if the data can be made available.
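
A sketch of that amortization (the dollar figures and traffic volume are assumptions, not real Cisco or Linksys numbers):

# Amortized cost per bit for a single router over a year, dividing total annual
# cost by the bits it actually routes. All inputs are illustrative assumptions.
annual_power_cost = 1_200.0     # electricity ($, assumed)
annual_replacement = 3_000.0    # hardware amortization ($, assumed)
annual_housing = 800.0          # rack space, cooling, etc. ($, assumed)
annual_cost = annual_power_cost + annual_replacement + annual_housing

seconds_per_year = 60 * 60 * 24 * 365
bits_routed = 5e9 * seconds_per_year * 0.2   # 5 Gbps link at 20% average use (assumed)

print(f"${annual_cost / bits_routed:.1e} per bit")   # ~1.6e-13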
posted by TruncatedTiller at 11:16 AM on October 11, 2009

