Why has it taken so long for internet multimedia to match TV and telephones in speed and latency?
April 25, 2009 11:34 PM

It was only recently that my internet connection finally became able to deliver decent-quality streaming video & audio, and reliable voice over IP that doesn't suffer from delays and distortions. Yet this quality of video and audio multimedia has been possible for many decades through TV, telephone, and radio. Why is it seemingly so much more challenging technologically to deliver high-throughput signals over the internet than through these traditional media? Is it because one is digital and one is analog? Or because of the internet has a more complex architecture? Or something else?
posted by lunchbox to Computers & Internet (18 answers total) 4 users marked this as a favorite
 
It has taken this long for compression technology to deliver sufficiently good quality over the existing pipes. Couple that with bandwidth speeds that just weren't available 10-15 years ago (think DSL/cable vs. dialup). It took this long for those two things to meet in the middle.
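A back-of-envelope calculation shows why both pieces had to arrive together (the figures below are rough illustrative assumptions, not measurements):

```python
# Why compression plus fatter pipes had to meet in the middle.
# All figures are ballpark assumptions for illustration.

raw_bps = 640 * 480 * 24 * 30        # uncompressed SD video: pixels x bit depth x fps
dialup_bps = 56_000                  # a 56k modem
compressed_bps = 1_500_000           # a plausible MPEG-4-era streaming bitrate

print(f"raw SD video:   {raw_bps / 1e6:.0f} Mbps")
print(f"vs dialup:      {raw_bps / dialup_bps:.0f}x too big for the pipe")
print(f"compressed:     {compressed_bps / 1e6:.1f} Mbps -- fits in a DSL line")
```

Neither a faster pipe alone nor compression alone closes a gap that size; you need roughly two orders of magnitude from each.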
posted by Wild_Eep at 11:42 PM on April 25, 2009


Streaming Video
posted by bigmusic at 11:43 PM on April 25, 2009


It's because TV and telephones are already using dedicated transmission systems / networks that are optimized for them. Just because the internet can do everything, that doesn't mean it's the best at it, and many of the problems have only been overcome by brute force.
posted by smackfu at 11:53 PM on April 25, 2009 [2 favorites]


Comparing TV/radio/phone to the Internet isn't really fair. Each of those technologies works very differently; they're heavily optimized for one application and one kind of traffic.

E.g., television provides lots of bandwidth but is unidirectional and has very high latency. (There is frequently a multi-second delay between when an event actually occurs on "live" TV and when it appears on your screen, and much longer, intentionally inserted delays aren't uncommon either.) The traditional phone system is bidirectional and has low latency, but it delivers just the bare minimum bandwidth required for voice communication to its endpoints. Attempts to do TV over phone lines were mostly unsuccessful until recently.

The Internet tries to be all things to all people, and it does pretty impressively at this. It's a lot to ask one system to do high-bandwidth broadcast very well while also doing low-latency, low-bandwidth point-to-point well. When you have to do both, there are a lot of optimizations you can't make that you could if you were only doing one or the other.

Many of the growing pains IPTV and VoIP are running into come from trying to squeeze the content down a pipe that's not necessarily an ideal fit. IP is not an ideal protocol for phone or for cable-esque television, but we want it to do both, so a lot of engineering gets done to try and shoehorn it in.
posted by Kadin2048 at 11:58 PM on April 25, 2009 [1 favorite]


The underlying technology behind the public phone service (aka the PSTN) has been digital for more than two decades. In a (very small) nutshell, the reason it has low, consistent latency is that the PSTN is a circuit-switched network, not a packet-switched network like the Internet.
posted by zsazsa at 11:59 PM on April 25, 2009 [1 favorite]


The network design that became the Internet started as a military design for a reliable communications network. Reliability was achieved through massive redundancy of pathways: large portions of the network could be destroyed while leaving Los Angeles connected to New York. Also, unlike the phone network, which dedicated a circuit to each call, Internet traffic is left to "find its own way" across the network, so as long as a path exists, a data packet will arrive; it just may take longer because the pathway might be circuitous.

Another layer of reliability was added with TCP, the protocol that carries most Internet traffic: it's structured so that packets that get lost can be resent. It's like using the postal service to communicate rather than a telephone: if a letter is lost in the mail, it can be sent again after requesting another copy. Also like the postal service, if a particular link in the chain gets busy, traffic can be queued up for a period and sent later on, rather than packets simply being dropped when current bandwidth is exceeded.

What this all adds up to is that the speed of the connection is not reliable: there's a lot of slack built into the speed of transmission to accommodate reliable delivery. For TV, radio and phone, you have a reliable speed of delivery: the signal is broadcast through the air or through a dedicated wire, so it just moves at the speed that physics dictates, and the receiving device can simply display/vocalize the signal as it comes in. The tradeoff is that any interruption in the broadcast is also delivered at the same speed, with no recourse for repeating those parts that got lost in the static.

The challenge to allow reliable streaming video/audio on the Internet was basically to get the whole infrastructure operating reliably and quickly enough that the inherent sloppiness in the speed of delivery of Internet traffic didn't matter. Essentially, there needed to be enough bandwidth all along the route that delays were minor, and could be handled with a bit of buffering if necessary.
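The buffering tradeoff can be sketched as a toy simulation (all numbers here are invented for the demo): packets of audio are sent at a fixed rate, network jitter makes their arrivals irregular, and pre-filling a small buffer before playback absorbs the variation.

```python
import random

random.seed(42)  # deterministic demo

def stutters(prebuffer_ms, jitter_ms=30, packets=500):
    """Count packets that arrive after their playback deadline."""
    missed = 0
    for i in range(packets):
        # packet i is sent at i*20 ms, then delayed 5..(5+jitter_ms) ms in transit
        arrival = i * 20 + random.uniform(5, 5 + jitter_ms)
        # playback needs packet i at (prebuffer + i*20) ms
        if arrival > prebuffer_ms + i * 20:
            missed += 1
    return missed

for pre in (10, 25, 50):
    print(f"{pre:2d} ms of prebuffering -> {stutters(pre):3d} stutters out of 500")
```

With enough prebuffer to cover the worst-case jitter, stutters drop to zero; the price is that much extra startup latency, which video can tolerate and conversation can't.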
posted by fatbird at 12:03 AM on April 26, 2009


the PSTN is a circuit switched network, not a packet switched network like the Internet.

Is this still true? Years ago when I had to deal with AT&T, they bragged about how they'd upgraded the underlying frame relay network to be completely packet-switched, but with quality-of-service routing built in that guaranteed baseline data rates. IIRC, they said that the circuits were all virtual now.
posted by fatbird at 12:05 AM on April 26, 2009


fatbird, you're right. The performance of those virtual circuits is still guaranteed to be as good as a circuit switched network, though, and they can do that because they get to set all the rules, unlike out on the wilds of the Internet.
posted by zsazsa at 12:09 AM on April 26, 2009


and they can do that because they get to set all the rules, unlike out on the wilds of the Internet.

And this is the basis of the fight for network neutrality: AT&T (and everyone else) wants to use the QOS built into their system for virtual circuits, to offer guaranteed bandwidth to customers who pay for it. So Google coughs up the extra hundred million, and their search results are guaranteed to be delivered faster than Yahoo's. The antitrust implications alone are a bit staggering. Or more pertinently, Youtube videos are suddenly a lot faster and smoother than Vimeo's.
posted by fatbird at 12:13 AM on April 26, 2009


Hey now, be fair. Commercial internet has been available for, what, less than two decades? Imagine a service that combined television technology of the early '60s with 1890s telephone technology. See how much more badass the internet is?

Okay, yes, I know, obviously we don't need to reinvent the wheel with every new technology, and the internet benefited from the invention of the telephone, TV, telegraph, radio ... my point is that it's still a comparatively recent technology and has had to play catch-up. Besides which, it's a generalist, two-way communications medium, not a specialist, one-way one.
posted by bettafish at 3:03 AM on April 26, 2009


Best answer: While the points above about the redundant nature of IP traffic, and about the internet having to be all things to all traffic, are certainly true, I think the biggest reasons come down to the specific networks involved and the analog/digital divide.

First, analog vs digital. It's certainly true that building analog circuits and technology is simpler. Chuck a changing current down a wire, and you automatically get a controllable electromagnetic field coming out of it. Stick another wire up nearby, and you'll get back a rough copy of the original current. The control circuitry is pretty simple too - I built my own simple tunable radio set when I was about 10.

OTA broadcast TV and radio are the end result of this discovery, and had large dedicated networks built to take advantage of it. It's pretty wasteful, though; it uses up a lot of spectrum to convey the useful information, and you have to have really big transmission towers using a lot of power. And still you get interference, non-recoverable data loss, and high latency.

Digital TV fits several channels into the space of one analog TV channel using compression, and needs only a few percent of the transmission power of analog broadcast TV, yet produces a higher-quality signal that's less prone to dropouts. In the UK at least, once analog TV is turned off and the bandwidth re-used, digital TV should be superior to analog in pretty much every way.

Telephone networks, when first designed, basically chucked a changing electrical current down a cable from one end to the other. There were no complex components encoding the human voice, merely a microphone and speaker tuned to the frequency range of the voice, and a long wire capable of carrying that signal. The whole phone network was designed around carrying that small frequency range long distances at relatively low power over a dedicated copper cable. Yes, there have since been many innovations using dedicated packet-switched networks in the middle, and digital tech in the handsets and exchanges, but fundamentally the cabling is still the same, and it's still low-bandwidth, prone to interference, and lossy. And international call charges are still ruinously expensive.
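To put numbers on that narrowband design - these are the standard telephony digitization figures (G.711/DS0), not anything specific to this thread:

```python
# The phone network passes roughly 300-3400 Hz of the voice.
# Digitizing it the classic way:
sample_rate_hz = 8_000     # comfortably above 2 x 3.4 kHz (Nyquist)
bits_per_sample = 8        # 8-bit companded samples (G.711)

ds0_bps = sample_rate_hz * bits_per_sample
print(f"one digital voice circuit (DS0): {ds0_bps // 1000} kbps")
```

That 64 kbps channel is the atom the whole digital PSTN is built from; it's plenty for voice and almost nothing for video.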

Analog cable for TV was again designed for the frequency spectrum, broadcast nature and latency tolerant nature of TV broadcasts.

So eventually we get the internet. The tech required to build such a digital packet-switched network is much more complex than that needed to build an analog broadcast system, which is why it's been around only about three decades, while TV and radio have been around over three times that, and telephones four times that. We then have the limitation that it's designed as a redundant, fault-tolerant system rather than a low-latency dedicated point-to-point system or a high-latency broadcast system, which the others above have covered nicely.

I was doing point-to-point, fast-frame-rate, real-time video transmission and storage within a building over an IP network back in the early '90s, so the capability has been there a while - the problem is the last mile.

The internet so far gets the scraps the other techs leave behind. Modems tried to fit data into the same narrowband frequencies as voice. The problem is, you don't get much - and you get all the latency overheads of IP traffic to boot. ADSL uses the unused high frequencies on the phone line, on a line that was designed for low-power narrowband traffic. The lines are way too far from the exchange and way too lossy for much high-frequency data. In the UK's case, the data part still routes over the old data network originally designed for modems and ATMs (though they are slowly upgrading to a proper digital backbone in the exchanges and datacentres: the 21CN upgrade). That ADSL works as well as it does is an absolute miracle.

Wifi gets the small short-range fragments of the spectrum that TV, radio and mobile phones haven't already nabbed, and shares it with baby monitors and microwave ovens.

Cable TV providers built their network for TV. They grudgingly put internet on the same wires, but again it's squeezed into the spare capacity, rather than given proper space of its own.

Where the last mile is a dedicated short fibre-optic cable to your house, designed and allocated purely for internet traffic (possibly with voice piggy-backed on top), you see what the internet is truly capable of - in various places in Europe, and in places like South Korea, internet connectivity is 100Mb fibre across most of the country, not just the city centres.

Just as TV and phones had to have dedicated, expensive transmission networks built just for them, so does high-quality, high-speed internet access - and governments have finally started to realise that over the last few years.
posted by ArkhanJG at 3:20 AM on April 26, 2009 [2 favorites]


Engineering is about achieving a result in the presence of design constraints.

The two biggest constraint categories are technical and economic.

Usually, what you see in the wild is a compromise/optimization involving both.

I am impressed at the thousands of examples of creativity that are responsible for the variety of things we have available within the technological constraints of the internet. As others have said, at its origin it was designed for narrower purposes than those to which it is now, somewhat suboptimally, being applied.

Knowing some of the details of the info content of video makes me appreciate what I get right now a lot more than someone who has grown up with a world in which it already existed. A tiny YouTube video in a few seconds is pretty impressive when you appreciate all the elements in the chain.

It will certainly get better, but for the moment I feel like I am living a Buck Rogers lifestyle, and I am in the relative technology backwoods of central Vermont, where cell phone towers are as rare as tolerant conservatives.

The short answer, without going into deep technological babble, is that we're awaiting something that can deal with the bandwidth needs at a low enough total system cost to make it economically appealing. Some folks are looking at this problem within the constraints as they currently exist, and others are looking for revolutionary approaches that eliminate some key constraints (e.g., power-line distribution, fiber optics). The latter come with their own sets of limitations, but really, economics usually has the final say.

Otherwise, we'd all own our own Space Shuttle, something that's obviously technically feasible, but rather costly.
posted by FauxScot at 5:27 AM on April 26, 2009


1- Television/radio is broadcast in nature: one "pipe" that anyone can receive, with no added "costs" as more users listen in. The internet, for all intents and purposes, is not. Even with multicast technology, for every user that begins to view a stream, a router somewhere has to find the stream and copy it down a pipe to the user. With enough people doing this, communication lines get saturated pretty quickly.

1a- Digital cable TV is changing this paradigm. For on-demand-style programming, they do indeed generate a separate signal for each user. This works out through good network design and the fact that the coaxial cable they use has tremendous bandwidth. The cable can carry what, 80 standard analog channels? And each analog channel slot can be converted into a digital channel, which I believe is about 38 Mbps apiece - in aggregate, that's more data than an OC-48 connection. By splitting this into the right number of nodes, I read somewhere that digital cable will eventually be able to offer an effectively unlimited number of channels from the perspective of an end user. This, however, requires a lot of fiber getting laid out to the nodes, and intelligent use of caching and multicast and whatnot. (How can Comcast have such a kick-ass data network, and such crappy customer service?)
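Running those numbers (all ballpark figures: ~38 Mbps is roughly what 256-QAM yields in one 6 MHz channel slot, and an OC-48 runs at ~2488 Mbps):

```python
# Ballpark capacity of a fully digital coax plant vs. an OC-48 backbone link.
channels = 80            # analog channel slots the coax can carry
mbps_per_channel = 38    # digital capacity of one 6 MHz slot (~256-QAM)
oc48_mbps = 2488         # OC-48 line rate, ~2.488 Gbps

total_mbps = channels * mbps_per_channel
print(f"fully digital coax plant: {total_mbps} Mbps")
print(f"that's {total_mbps / oc48_mbps:.2f}x an OC-48 link")
```

So yes: the wire into a cable subscriber's wall has more raw capacity than a circa-2009 backbone circuit, which is why node-splitting works.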

2- The phone system is similar - it was specifically designed to carry voice to the end users, through creative network design and a lot of really smart people working really hard (how much stuff was invented by Bell Labs? The transistor, anyone?).

3- The internet, on the other hand, was NOT designed to transmit digital video. Getting a text-based network to do this requires a lot of hacks. Eventually, it will work out.
posted by gjc at 6:40 AM on April 26, 2009


Everyone go back and read smackfu's answer again. While all the other answers are facets of the problem, the A-1 issue is that all our existing lines which CAN be used for digital transmissions ARE still largely used for analog transmission. That's why all the TV networks make such a big smacking deal about the DTV conversion: it will save them millions and millions of dollars to just hold out till analog dies and then re-use the infrastructure already in place to deliver strictly digital content. IANAIE (not an internet expert), but I can attest that, just based on how any bus-like system works, the amount of overhead required to use the same lines for both digital AND analog is much greater than using them for just one or the other, making the end-user experience suck worse during the transition than it did before, or will after.

The bottom line is, you answered your question in the description. The reason digital media is not up to speed with analog tv / phone is because of analog tv / phone.
posted by judge.mentok.the.mindtaker at 6:51 AM on April 26, 2009


Rather than what most people are saying above, I think the answer is a simple one: merely increased bandwidth. You could have done multimedia stuff over the internet at the same speed and latency in 1989 if both the client and server had been at major universities or somewhere else with enormous network capacity between the IP hosts. (Though no one would have tried, because they would have been strangled by a campus IT administrator screaming, "Do you know how many million emails and Usenet messages we could have sent for the price of what you just did?! Just FedEx them a VHS tape!")

Building on zsazsa's answer, IP is a packet-based network that furthermore isn't designed to provide strict performance guarantees, and consequently requires lots of spare bandwidth to produce the sort of performance you're seeing. There has been digital, computer-based video for decades, as ArkhanJG mentioned, but it was done over dedicated switched circuits like ISDN, or over specially designed packet networks like the ATM installations deployed specifically for carrying audio and video.
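The need for spare bandwidth can be illustrated with the textbook M/M/1 queueing formula, T = 1/(mu - lambda) - an idealized model I'm using here for illustration, not something from the thread:

```python
def avg_delay_ms(link_mbps, offered_mbps, packet_bits=12_000):
    """Mean time a packet spends at one link, per the M/M/1 formula T = 1/(mu - lambda)."""
    mu = link_mbps * 1e6 / packet_bits       # packets/sec the link can serve
    lam = offered_mbps * 1e6 / packet_bits   # packets/sec arriving
    assert lam < mu, "link overloaded: delay is unbounded"
    return 1000.0 / (mu - lam)               # milliseconds

# Delay stays tame at half load but blows up near saturation:
for load in (0.5, 0.9, 0.99):
    print(f"10 Mbps link at {load:.0%} load: {avg_delay_ms(10, 10 * load):6.1f} ms")
```

Queueing delay grows without bound as a link approaches 100% utilization, which is exactly why a best-effort network only feels "real-time" once it has lots of headroom.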
posted by XMLicious at 7:46 AM on April 26, 2009


Here's a completely non-technical analogy I just thought of, but it gets complicated, so you'll have to bear with me. Are you familiar with the concept of maritime choke points? The idea basically describes the constraints on the complicated path a ship must take around continents and islands and things in going from its origin to its destination.

Video or audio transmitted over an analog circuit or a digital ISDN line is something like shipping traffic going straight back and forth between destinations on the Grand Canal of China, whereas transmitting through the internet or another large IP network is like shipping between Sevastopol in Ukraine, on the Black Sea, and Singapore in Southeast Asia, including negotiating areas that are complicated to navigate, like the Strait of Malacca.

Imagine you're a merchant and a big customer tells you that they'll make an enormous order of your perishable product, big $$, but only if you can absolutely guarantee that you can provide a certain quantity, fresh, on their doorstep every morning. And there's huge $$ penalties if you don't.

If you're shipping on a route like a canal, it's no problem; you can be a fairly small mercantile outfit, but as long as you put a shipment on a boat going down the canal every day, there's going to be a steady stream coming in to the destination. Maybe you put double or triple the guaranteed amount in each shipment to cover the next day in case of an unforeseen delay or accident, and you have an agent in the destination city sell off the surplus if it isn't needed the next day.

But if the shipping route is something like Sevastopol-to-Singapore a small merchant couldn't make that guarantee: things like weather or pirates could take out or delay multiple ships. You really need to be a large merchant already doing tons of business Sevastopol-to-Singapore, and furthermore split your shipments across a large number of separate ships, preferably ones taking different routes around the world, to be able to guarantee that a ship is going to arrive in Singapore every day with enough product to satisfy the agreement with the customer.

And so furthermore, for the Sevastopol-to-Singapore shipping route to even begin delivering the required daily arrival of one of the merchant's ships, there has to be an even more immense number of ships that are simply going between Sevastopol and Singapore in the first place.

Your internet connection finally getting enough bandwidth to carry quality video between, say, Hulu's server and your home computer is like the general traffic in ships going back and forth from Sevastopol to Singapore finally getting large enough that a merchant could even consider taking on the kind of contract described above.

That was probably way too complicated an analogy to actually be helpful to anyone but it was fun to write.
posted by XMLicious at 8:48 AM on April 26, 2009 [1 favorite]


Because everyone loves to hate Microsoft.

All TVs used the same hardware. Cable was basically a monopoly. Phones were made by one company. That's why the technology developed so rapidly: everyone was on the same page.
posted by Zambrano at 9:46 AM on April 26, 2009


Response by poster: Thanks for the awesome replies! That explains it. There were a lot of great replies so I had trouble choosing a "Best Answer".
posted by lunchbox at 11:06 AM on April 26, 2009

