No, keyboard cat, you can't play this off.
June 14, 2010 9:35 AM

Why don't satellites and probes send back video from space?

NASA puts out a steady stream of images from Mars rovers, solar probes, moon landers, and more. Huge, beautiful, color (or color-added) images, too. So why is there no video? (I've found some videos, but they're either of rockets exiting the atmosphere or composite images strung together in a short loop.)

We have satellites looping around Titan, machines staring into the face of Martian dust storms, probes plunging into the atmosphere of Venus and bursting into flames... I would LOVE to see this in real time, and in as high-def as can be managed.

Is it a technical limitation? Budgetary? I've read that every extra pound on the rocket adds another $10K in cost. Is it an issue with data transmission over such vast distances? It seems like this is an opportunity to get the public vicariously interested in space exploration. What are the stumbling blocks here?
posted by greenland to Technology (21 answers total) 4 users marked this as a favorite
 
Bandwidth issues, most likely. Video demands fat pipes, and satellites are designed under rather stringent cost constraints. Pictures are far easier to beam back across hundreds of millions of miles than video is.
posted by dfriedman at 9:45 AM on June 14, 2010


Best answer: A combination of bandwidth and availability of communications, I think. This article claims the Mars rovers had bandwidth about "five times the speed of home dial-up", which doesn't seem good enough for full-motion video. Add to that the fact that the probes are not in constant contact with Earth (contact depends on the relative positions of the probe and Earth, and on the availability of receiver stations with line-of-sight to the probe), and a given probe might only get (number-out-of-the-ass) an hour a day to talk to Earth, say. Video would eat up a lot of that limited time.
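A back-of-the-envelope sketch of that claim (the dial-up speed, video bitrate, and one-hour contact window here are my assumptions, not from the article):

```python
DIALUP_BPS = 56_000            # classic 56k modem
link_bps = 5 * DIALUP_BPS      # ~280 kbps, per the "five times dial-up" claim

sd_video_bps = 1_000_000       # even heavily compressed SD video wants ~1 Mbps
print(link_bps / sd_video_bps) # 0.28 -- barely a quarter of the needed rate

# With, say, one hour of contact per day, the entire daily downlink is:
daily_bytes = link_bps / 8 * 3600
print(daily_bytes / 1e6)       # 126.0 -- megabytes per day, for *everything*
```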
posted by backseatpilot at 9:48 AM on June 14, 2010


I don't understand the bandwidth issue either. Haven't we had Communications Satellites basically transmitting TV signals for decades?
posted by vacapinta at 9:53 AM on June 14, 2010


Best answer: It's dark and what they're imaging is usually very far away. Even when they're imaging in the visible spectrum it's pretty complicated just to get a good still image.
posted by IanMorr at 9:58 AM on June 14, 2010


Because nothing is happening quickly enough for video to be of any interest. Landing is the only case where it might be nice, but that is the single most challenging part of any probe's mission, and nobody's going to put a video camera where it will be ablated away in two seconds. Even the dust storms on Mars are a fairly slow phenomenon. Add the bandwidth constraint on top of this, and there's just no benefit to justify the cost.
posted by kiltedtaco at 9:58 AM on June 14, 2010


Best answer: Deep space hardware is extremely bandwidth-constrained, relative to what you're used to here on Earth. Cassini at Saturn has a maximum download bandwidth of 166 kbps, or 20.75 KB per second, and that is only during the brief windows each day when NASA's Deep Space Network antennas (enormous dish antennas scattered around the world) are targeting and listening to/for Cassini specifically. As it is, there is enormous challenge in simply downlinking the science data collected, and so there isn't really much capacity to send video bits down to Earth.
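To put that 166 kbps figure in perspective, a quick sketch; the still-image size and video bitrate are assumptions for illustration:

```python
# How long Cassini's 166 kbps link takes to move data.
link_Bps = 166_000 / 8                 # 20,750 bytes per second

still_image = 1_000_000                # assume a ~1 MB compressed still
print(still_image / link_Bps)          # ~48 seconds per image

hd_second = 5_000_000 / 8              # one second of ~5 Mbps HD video, in bytes
print(hd_second / link_Bps)            # ~30 seconds of downlink per second of video
```

In other words, video would consume the daily communication window about thirty times faster than real time, leaving little room for science data.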

This is not to say that it COULDN'T be done. But the purpose of robotic space science missions has traditionally been, you know, science data collection rather than entertainment. And there's very little science you can do with HD video that you can't do with HD still imagery. Remember, on an interplanetary spacecraft the scenery tends to change pretty slowly.

It's rumored that there will be HD video hardware on NASA's next-generation rover. Apparently NASA was going to remove the hardware from the vehicle to save mass, but James Cameron heard about it, got in touch with NASA brass, and argued strenuously for its inclusion for PR purposes. So you'll see high-definition video from Mars. But Mars is a special case: the data transmission situation there is much better than it is elsewhere in the solar system because we have not one but multiple communications satellites currently in orbit there; each of these has lots of electrical power dedicated to big uplink transmitters.
posted by killdevil at 9:59 AM on June 14, 2010 [1 favorite]


Bandwidth + power. Communications satellites just bounce signals and have, relative to interplanetary probes, a lot of power available via bigger solar panels and higher light flux density. It takes more power to record, compress, and transmit video. Also, I think researchers would rather have higher-resolution still images than lower-definition video.

But the probes that were launched decades ago simply didn't have access to modern high-speed imaging sensors and the CPUs needed to handle video. Voyager 1 would have been designed in the very early '70s, prior to its 1977 launch. Imagine how TV equipment looked then versus now.
posted by GuyZero at 9:59 AM on June 14, 2010


A lot of the color images we see from spacecraft like Hubble are colorized here on Earth.

Bandwidth is most certainly a factor, but you also have to remember that the probes that are far from Earth were created a long time ago. The Voyager probes, for example, were launched in 1977. Of the three Mars rovers, the first landed in 1997 and the subsequent two in 2004. 2004 sounds fairly recent, but you also have to take into account that they probably developed it for years before launch, and it takes a year to get there (well, 8 months-ish I guess).

What I'm saying is there is a lag between available technology and what we shoot into space. Figure that by the time the next generation of humans are watching stuff being beamed back they'll be seeing it in 3D high def with scratch and sniff technology.
posted by Gainesvillain at 10:00 AM on June 14, 2010


Best answer: Not enough bandwidth. For example, here's an article about Spirit and Opportunity's data links: 11,000 bps straight to Earth, barely fast enough for pictures, and 128,000 bps relayed through a satellite orbiting Mars, which is decent and could theoretically support some awful-looking video. Think RealPlayer circa 1999.

But things are looking up. This interesting paper on data rate limitations from Mars says that the next Mars rover, the Mars Science Laboratory, has a camera capable of capturing high-definition video. It doesn't present any solutions for getting that data to Earth in a reasonable timeframe, though.

On preview, vacapinta: communication satellites are "only" 22,000 miles away. The Moon, which we have gotten video from, is 238,857 miles away. Depending on where the planets are in their orbits, the distance from Mars to Earth varies between 34 million and 249 million miles. That's a long way for a signal to go, especially one sent by a little rover whose solar panels get increasingly covered with dust.
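The one-way light-travel delay over those distances makes the point on its own; a quick sketch:

```python
C_MILES_PER_SEC = 186_282   # speed of light in miles per second

distances = {
    "comsat (geosynchronous)": 22_000,
    "Moon": 238_857,
    "Mars at closest": 34_000_000,
    "Mars at farthest": 249_000_000,
}
delays = {name: miles / C_MILES_PER_SEC for name, miles in distances.items()}
for name, secs in delays.items():
    print(f"{name}: {secs:.1f} s one way")
# comsat ~0.1 s, Moon ~1.3 s, Mars between ~3 minutes and ~22 minutes
```

So even before bandwidth enters into it, "real time" video from Mars would be anywhere from three to twenty-two minutes stale.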
posted by zsazsa at 10:01 AM on June 14, 2010


Because getting video from outer space is expensive, and the kinds of things we're looking for don't require video to better understand them.

Getting a Martian duststorm on video would be sweet, but we don't need video of a duststorm to understand the Martian atmosphere, Martian weather, Martian gravity, Martian soil, etc.

If you dropped a video camera into the Titan atmosphere, you'd spend millions of dollars to get hours and hours of footage of ... well, not much, actually. Not enough to justify the expense.

Let's say there's fish swimming around beneath the ice of Europa. Step 1 would be to put any sort of a camera or sensor beneath the ice. Step 2 would be to get a single-frame picture of a Europan salmon. Step 3 would be getting video of the damn thing swimming.

Step 4 would be to broil it with butter and lemon.
posted by Cool Papa Bell at 10:03 AM on June 14, 2010


For what it's worth, the JAXA KAGUYA vehicle had a 1080p camera onboard filming the moon.
posted by saeculorum at 10:03 AM on June 14, 2010 [1 favorite]


Vacapinta, perhaps some engineering types will show up here to explain the details (I'm not going to attempt a coherent explanation without a license), but suffice it to say that sending data 1.246 billion kilometers from Saturn is much harder than sending it the 22,000-odd miles from a geosynchronous communications satellite. Transmitter power is low (and the received signal strength is REALLY, REALLY low), so bit rates are very limited even with extremely sophisticated gear on the receiving end.
posted by killdevil at 10:08 AM on June 14, 2010


Best answer: Bandwidth is a big one, but also the design of CCD image sensors comes into play. To get good video, you need something called an "interline transfer" CCD, which gives up half its photon-sensing area in exchange for the ability to acquire sequential images smoothly. Scientific imaging is best done with a "frame transfer" CCD, which is more sensitive.

Bandwidth over distance is really a hard problem. Consider that your cellphone can run for a few hours on a battery the size of a Fig Newton, using a small fraction of a watt to send audio a mile or two to the nearest tower. But a typical AM radio station, to send the same amount of information (a fairly low-fi audio signal) perhaps a hundred miles, uses tens of kilowatts. As the distance a signal must travel increases, you need to put a lot more power behind it to make it intelligible at the receiver.

Antenna gain can help a little -- we have massive dishes on Earth that focus their sensitivity on the incredibly faint signals coming from distant probes -- but still, the amount of data you can get back over distances like that is minuscule. It's not a question of money, but simply practicality. Do you want to quadruple the size of the probe to include enough solar panels to power a larger transmitter, so you might squeak through a video clip every few weeks? Or is a still image every few hours adequate?
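To put rough numbers on the path-loss point, here's the free-space (Friis) relation with illustrative figures; the transmitter power, antenna gains, and frequency below are assumptions for the sake of the sketch, not a real link budget:

```python
import math

def friis_rx_watts(p_tx, g_tx, g_rx, wavelength_m, dist_m):
    """Received power via the Friis free-space equation:
    P_r = P_t * G_t * G_r * (wavelength / (4 * pi * d))**2"""
    return p_tx * g_tx * g_rx * (wavelength_m / (4 * math.pi * dist_m)) ** 2

wavelength = 0.036       # ~8.4 GHz X band, common for deep-space links
dist_saturn = 1.246e12   # meters: the 1.246 billion km to Saturn mentioned above
p_rx = friis_rx_watts(
    p_tx=20,             # a modest spacecraft transmitter, in watts
    g_tx=5_000,          # ~37 dBi high-gain antenna on the probe
    g_rx=1e7,            # ~70 dBi: a giant receiving dish on Earth
    wavelength_m=wavelength,
    dist_m=dist_saturn,
)
print(p_rx)              # ~5e-18 W: billionths of a billionth of a watt
```

Received power falls with the square of distance, so even huge dish gains on both ends leave you fishing attowatts out of the noise.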

If you want to run the numbers on this stuff to really get your head around it, consider getting your amateur radio license. Studying for the exam will immerse you in the concepts of power, gain, path loss, noise, and bandwidth. Hams deal with this all the time, and the knowledge you gain will give you a better understanding of things you use every day, like phones and wifi.
posted by Myself at 10:24 AM on June 14, 2010 [1 favorite]


Wasn't this the mission for Al Gore's satellite? (Follow the link for an update.) A video camera pointed at day-side Earth from a high orbit, so anyone could check up on our planetary disk at any time?
posted by Rash at 10:24 AM on June 14, 2010


Some people have discussed the technical drawbacks for interplanetary broadband, but there's a practical one too: many of the events we're interested in "watching" take place over hours, days, even weeks, rather than the few minutes we expect from most videos. Planetary bodies move at tremendous speeds, but they do it across tremendous distances: even at almost 30 km/s, it takes the Earth a year to revolve around the Sun just once.

For example, the composite video here actually captures two whole months of real time. As a result, you'd have to sort through days of utterly boring "video" with very little actual motion to get a few seconds of something interesting, e.g. a collision.

Given the technical difficulties discussed above, it would seem that there's so little to be gained by actual footage that it's just not worth it. One image an hour is more than plenty the vast majority of the time.
posted by valkyryn at 10:36 AM on June 14, 2010


Response by poster: These are amazing, succinct answers. I wish I could mark everything as best.
posted by greenland at 10:47 AM on June 14, 2010


It's really amazing that we can communicate with those probes at all. Voyager 1 is 16,858,000,000 km from Earth and has a transmitter power output of 23 watts. Compare that to your average FM radio station, with a TPO of maybe 20,000 watts (note, not ERP), which can be picked up for maybe 50-80 km. Communicating with things very far away requires very high-gain antennas, but even with the huge antennas of the Deep Space Network, the signal has such low power by the time it reaches Earth that it's nearly indistinguishable from noise.

There are some tricks that let you trade off bandwidth for noise. For example, suppose the transmitter repeats its signal 5 times. The receiver can record all 5 transmissions and average them together. Since the noise is random, it tends to average out, leaving the signal strong enough to read. This kind of trade-off lets you pick extremely faint signals from billions of kilometers away out of the noise where you wouldn't have been able to before. It's also why these probes all have the bandwidth of an old dial-up modem despite carrying the most sophisticated electronics and antennas available at the time they were designed.
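A toy simulation of that averaging trick (the bit pattern, noise level, and repeat count are all made up for illustration):

```python
import random

random.seed(1)                          # deterministic toy run

signal = [1.0, -1.0, 1.0, 1.0, -1.0]    # a tiny bit pattern, sent as +/-1
noise_amp = 3.0                         # noise three times the signal amplitude
N = 1000                                # number of repeated transmissions

# Average N noisy copies: the signal adds coherently while random noise
# cancels, improving SNR by roughly sqrt(N).
avg = [0.0] * len(signal)
for _ in range(N):
    for i, s in enumerate(signal):
        avg[i] += s + random.uniform(-noise_amp, noise_amp)
avg = [a / N for a in avg]

decoded = [1.0 if a > 0 else -1.0 for a in avg]
print(decoded == signal)                # True: the bits come through the noise
```

Notice the cost: recovering the bits took 1000 transmissions' worth of bandwidth, which is exactly the trade-off described above.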
posted by Rhomboid at 10:58 AM on June 14, 2010 [1 favorite]


Oh, and for a primer on the challenges of deep-space data transmission, Google the Galileo Jupiter mission and its fouled main antenna, which remained catastrophically jammed, and therefore undeployed, for the entirety of that spacecraft's lifetime. The engineers supporting Galileo came up with an incredible set of workarounds for this problem that involved (a) using the backup low-gain antenna on the spacecraft, with a capacity a tiny fraction of the main antenna's planned rate, and (b) storing unsent data on a single onboard electromechanical tape recorder that was never designed to be used for the purpose. And this was all worked out, and elaborately patched flight software uplinked, *while* Galileo was in transit to Jupiter - saving a mission most thought was completely ruined by the snagged antenna.
posted by killdevil at 11:11 AM on June 14, 2010 [2 favorites]


And another good one to read about: Cassini's Huygens Titan descent probe was launched with faulty receiver firmware that could not accommodate the frequency shifting of signals returned from the surface of Titan, caused by the large relative velocity between the lander and the Cassini orbiter.

One single engineer working for ESA, which built Huygens, discovered the existence of this design flaw while running some tests of the hardware during Cassini's transit to Jupiter. Since there was no way to modify the Huygens receiver firmware remotely, the solution eventually decided upon was to alter Cassini's trajectory on approach to Titan after the release of Huygens so as to lower the Doppler-shifting of the probe's signals to values that could be handled successfully by the flawed receiver onboard the orbiter.

It's a cool story.
posted by killdevil at 11:36 AM on June 14, 2010 [1 favorite]


Err, Cassini's transit to Saturn
posted by killdevil at 11:46 AM on June 14, 2010


Vacapinta, perhaps some engineering types will show up here to explain the details (I'm not going to attempt a coherent explanation without a license), but suffice it to say that sending data 1.246 billion kilometers from Saturn is much harder than sending it the 22,000-odd miles from a geosynchronous communications satellite. Transmitter power is low (and the received signal strength is REALLY, REALLY low), so bit rates are very limited even with extremely sophisticated gear on the receiving end.

Exactly. Someone else can do the math, but even if they were able to get a transmitter to focus its full power into a beam just one degree wide, the signal would have spread enormously by the time it reached Earth. It's something like trying to get cell service on the Moon.
posted by gjc at 5:51 PM on June 14, 2010


This thread is closed to new comments.