Why did CPUs stop getting faster about 5 years ago?
December 9, 2007 4:35 PM

Why are CPUs not getting faster any more? This plot is some data I scraped from a few sources around the web. I know it's not exhaustive, but I'm looking to pick out long-term and general trends. It's clear to me that processor clock speed has plateaued at roughly 4 GHz. Why?

I am aware of Moore's law, and the megahertz myth, and things like bus speeds and cache sizes and instruction sets, and that comparing clock speeds across processors isn't especially meaningful. I know that Intel stopped pumping up clock speeds, I know about pipelining and predictive branching and multiple cores and which bottlenecks are where in a computer. I understand that despite the CPU speed flattening, actual computing power has continued to increase.

However, surely if a chip designer can run the core at a faster clock, they would. Why can't they?

My understanding was that this was essentially an issue of thermal management: faster switching + fixed settle time = more current = more resistive heating.

However, someone else pointed out to me recently that this might be an issue with the RC time constant of the interconnects on the chip.

Ideally I'd like to find an article about this phenomenon from an EE/physics point of view, preferably from someone in the industry. Most preferable would be in a journal or IEEE publication; a trade magazine would be good too. An article in something like Wired would be okay, but I need to cite a source and the popular press is notoriously bad when it comes to this sort of thing.

However, my googlefu is really failing me here, so any explanation or pointer to search terms, or really anything of help would be greatly appreciated.

Thanks!
posted by sergeant sandwich to Computers & Internet (29 answers total) 16 users marked this as a favorite
 
The emphasis on clock speed is a marketing decision, not a technical one. Marketing needs some way to show the consumer that one proc is better than another, so they chose clock speed.
posted by arnold at 4:51 PM on December 9, 2007


I don't think I can provide you with the technical answer you want, but I've spoken with some supercomputing pros in the academic field, and they say that the physical limits of the processors are what keep speeds from going into the 5-6 GHz range.

So, instead of going up, they're spreading out with multiple cores.
posted by skwillz at 4:57 PM on December 9, 2007


Searching for yonah, netburst, stages, etc. might help, e.g. this article on AnandTech.
posted by panamax at 5:05 PM on December 9, 2007


Best answer: It's a subtle question, which I discovered when I sat down to bang out an answer and decided I needed to do some research first.

I will refer you first to the introductory slides [pdf] for a Stanford electrical engineering course on the topic. First look at slide 13, and you'll see that while clock speeds aren't going up as fast as they used to, they still are increasing steadily. Second, look at slide 16 for a good hint at why the rate of clock increase may have slowed.

Broadly speaking it is a power issue. Chips today run at the absolute limit of power density; they generate just as much heat as the package is able to dissipate, and if the package could do better, you can be damn sure they would increase the power budget. You could make the chips go faster, but you'd have more power to get rid of. It's a classic engineering tradeoff. Is your engineering effort better spent on creating clever low-power tricks throughout the chip so you can run faster, or on creating more parallel circuits so you can do more in a clock cycle, with a slower clock? Complicate the equation with the unfortunate fact that your successful marketing to date has been based on clock speed = performance (whoops), and with competitive pressures that give performance a time value (what's fast today is slow in a year), and it becomes a rather messy issue.
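To put rough numbers on that tradeoff, here is a minimal back-of-the-envelope Python sketch of the standard first-order dynamic-power relation, P ≈ α·C·V²·f; the activity factor, switched capacitance, and supply voltage are assumed values for illustration, not figures from the slides or from any real chip:

```python
# Back-of-the-envelope dynamic power model: P_dyn ~ alpha * C * V^2 * f.
# The activity factor, switched capacitance, and supply voltage below are
# assumed, plausible orders of magnitude, purely to illustrate why
# "just raise the clock" runs into the power wall.

def dynamic_power(alpha, c_switched_f, vdd_v, freq_hz):
    """First-order CMOS dynamic (switching) power in watts."""
    return alpha * c_switched_f * vdd_v**2 * freq_hz

ALPHA = 0.15        # fraction of the capacitance that switches each cycle
C_SWITCHED = 50e-9  # total switchable capacitance in farads (hypothetical chip)
VDD = 1.2           # supply voltage in volts

for f_ghz in (2.0, 3.0, 4.0, 6.0):
    watts = dynamic_power(ALPHA, C_SWITCHED, VDD, f_ghz * 1e9)
    print(f"{f_ghz:3.1f} GHz -> ~{watts:5.1f} W of switching power")

# Power scales linearly with f here, and in practice a faster clock usually
# also needs a higher VDD, which costs quadratically on top of that.
```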

Around 2000 the equation flipped as we hit the power wall. The design mentality of a speed race - keep increasing the clock speed - no longer makes sense when you need 10x the engineering effort to double the clock speed as you do to double performance through parallelism.

I should also mention that while the transistors themselves are not close to their fundamental speed limit, the microprocessors contain large functional units that must complete all of their calculations within one clock cycle, and making these functional units fast enough can be quite a challenge, another source of the extra engineering effort that comes with increasing clock speed (or decreasing cycle time if you prefer). Timing margins shrink, race conditions are harder to deal with, etc. AMD has recently had trouble with the Translation Lookaside Buffer, and it is reported that the bug occurs only at higher clock frequencies. The RC interconnect delay comes into play here as well; wires are slow enough that you have to pay attention to their delay and put repeaters everywhere, another source of extra design effort.

I will try to dig up some articles when I get to my desk tomorrow.
posted by PercussivePaul at 5:21 PM on December 9, 2007 [7 favorites]


One of the biggest things they are working on these days is not clock speed, but power efficiency. They're still getting faster, although not in the increments they were in the past. There's enough computing power out there right now that they can afford to do this.
posted by azpenguin at 5:22 PM on December 9, 2007


There is no "single cause" to the decrease in clock speeds in chips. There are quite a few compounding issues that have essentially stopped the increase in CPU clock frequencies.

1) Smaller feature sizes do all sorts of weird things. I don't think ten years ago anyone would have seriously considered that optical lithography would take us down to (and below) the 45 nm technology node, which is actually below the wavelength of the light used to expose it. Leakage current (due to fun things like quantum tunneling) is making the static power of a chip more and more significant. As a result, dynamic power needs to correspondingly decrease to keep chips from melting.
2) Designing at a faster clock is very expensive. CPU designers are used to using exotic circuit styles (domino logic, for instance) to reach faster clocks, but ASIC designers are not. Development for slower design styles (i.e., classic static CMOS) has become a bigger part of the pie than CPU-style design. Developments in the former tend to migrate to the latter.
3) It's rather difficult to get data off of devices at high data rates. Multigigabit serializers are taking over major data paths. Although this doesn't have an inherent impact on CPU speeds, it means that bus speeds are becoming less important.
4) Pipelining only gets you so far. It's already common to use a pipeline stage simply to absorb propagation delays along a wire. There's no way to improve those propagation delays short of optical interconnects. We're already past the point of considering RC time constants of interconnects. It's more common to hear about characteristic impedance (traditionally an RF phenomenon) of interconnects.
5) Memory bandwidth hasn't kept up with CPU speed. At the absolute far end (graphics DDR RAM), it is possible to get about 1 GHz out of memory. However, more commonly used PC memories are not nearly so fast. As a result, the net benefit of a higher clock frequency is undercut by any direct memory interactions (see the sketch just after this list).
6) Faster CPUs require bigger caches (to ensure that number 5 doesn't become as big of an issue - you don't want a CPU to have to go out to memory to do anything when memory is several orders of magnitude slower than the CPU). Bigger caches are incredibly expensive in terms of die area. This is why server processors are quite a bit more expensive than consumer CPUs.
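To put item 5 in concrete terms, here is a minimal Python sketch of how many core cycles a single trip to main memory costs as the clock rises; the 60 ns latency is an assumed ballpark figure, not a measurement of any particular system:

```python
# Rough illustration of the "memory wall": a DRAM access with a fixed
# wall-clock latency costs more and more CPU cycles as the core clock rises.
# The 60 ns figure is an assumed ballpark for main-memory latency.

DRAM_LATENCY_NS = 60.0

for core_ghz in (1.0, 2.0, 4.0, 8.0):
    cycle_ns = 1.0 / core_ghz                 # one clock period in nanoseconds
    stall_cycles = DRAM_LATENCY_NS / cycle_ns
    print(f"{core_ghz:3.1f} GHz core: ~{stall_cycles:4.0f} cycles lost per trip to memory")

# A faster clock makes every cache miss relatively more expensive, which is
# why bigger caches (item 6) have to come along for the ride.
```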

On preview, PercussivePaul has the right idea - it's simply cheaper to design for parallelism than for speed.
posted by saeculorum at 5:35 PM on December 9, 2007 [1 favorite]


Even more important than memory bandwidth (throughput) is memory latency. You can increase throughput by running more memory chips in parallel, but that doesn't help latency at all. Caching has a lot of the same drawbacks as pipelining: it lets you run really fast when things are going predictably, but a few cache misses or pipeline stalls can have a horrific effect on your overall performance.
posted by hattifattener at 6:15 PM on December 9, 2007


This article in American Scientist perfectly answers your question and then goes into some interesting stuff about parallel processing.
posted by Uncle Jimmy at 6:42 PM on December 9, 2007


And this article on ars technica says that research on a new three-dimensional design for the transistor "could boost processor clockspeeds north of 20GHz to a possible 50GHz."
posted by jholland at 6:48 PM on December 9, 2007


Everyone in the industry knew this would eventually happen, but no one was really sure when. It ended up happening earlier than expected -- I think the industry consensus was that clock speed was going to top out at about 10 GHz, which is why Intel made a huge (and ultimately failed) bet on the "Netburst" architecture. It was designed to run optimally at about 6 GHz, but they never got that high, and at lower clock rates it made too many concessions.

They're up against physical limits: the speed of light, Planck's constant, the size of atoms, and a couple of others. One big problem is tunneling, quantum leakage. As transistors get smaller, and as you use smaller voltages, there's a greater and greater chance that an electron will jump from the source to the drain even when the FET is "off". The smaller the FET, the more of that you'll see.

You can prevent that by using higher voltages. If the hill is taller, the chance of an electron tunneling is lower. But if the voltage is higher, then it means you have to use more charge in the gate, which makes the switching time slower. But if you don't do that, eventually there comes a point where the quantum leakage approaches the level of a normal signal, and then you can't tell if the FET is "on" or "off".

Also, using a higher voltage means you use more power, and cooling is a real issue.

We haven't outright topped out yet; it's still possible to make more gains. But we're near the limit of what's possible with MOSFETs. And we're also near the limit of what we can buy with making the devices smaller. Right now it's down to the point where some insulating layers are less than 10 atoms thick. At a certain point when you're trying to get smaller, you start running into granularity issues, and we're near that point.

There are two alternate approaches which could conceivably yield vastly higher switching rates, but both are radically different than anything we're currently using. One is light gates. (I don't know what the official name of this is.) The other is Josephson Junctions. There has been research into both for decades, but neither is remotely close to being ready for prime time. (A "big" device for either right now is 50 gates. There's a non-trivial issue scaling them up, not to mention the weird operational environment needed for Josephson Junctions.)

However, a clock rate stall in MOSFET technology doesn't mean that processors will cease to increase in compute power. There's a lot that can be done in terms of architectural changes to increase compute power without requiring increased clock speeds. Increasing parallelism is the ticket, and that's why dual-core and quad-core processors are becoming more and more common. But there are other things, too.
posted by Steven C. Den Beste at 7:11 PM on December 9, 2007 [2 favorites]


It's about power efficiency. Power usage skyrockets in the upper GHz range.
posted by mphuie at 10:04 PM on December 9, 2007


Best answer: Sergeant sandwich, the geometric phenomenon you are referring to is described here. Essentially, the problem is that as features on a chip are scaled down, the resistance increases in inverse proportion to the width of the interconnect wire. That is, the resistance goes up as the wire gets narrower. The capacitance of the wire decreases proportionally to the width of the wire. That is, the decreasing width decreases the capacitance. When you combine them as R * C, one increasing and the other decreasing proportionally, the RC delay remains constant for a given length of wire even as you shrink it. The RC delay determines the time it takes a signal to propagate from one end of the wire to the other. What this all means is that while shrinking features tends to speed everything else up, for example transistor switching delays, the propagation delay across the connecting wires does not speed up.
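For anyone who wants to see that argument with numbers, here is a toy Python model of a fixed-length wire whose width shrinks while, as in the simplified argument above, the metal thickness and dielectric spacing stay put; the material constants are generic textbook values, so treat it as a sketch of the proportionalities rather than real process data:

```python
# Toy model of the scaling argument: hold the wire length fixed and shrink
# only its width, with metal thickness and dielectric spacing held constant
# (the simplification used in the post). R rises as 1/width, C falls in
# proportion to width, and the RC product barely moves.

RHO_CU = 1.7e-8          # resistivity of copper, ohm*m
EPS_OX = 3.9 * 8.85e-12  # permittivity of an SiO2-like dielectric, F/m

THICKNESS = 400e-9   # metal thickness, held fixed
SPACING = 200e-9     # dielectric spacing to the layer below, held fixed
LENGTH = 1e-3        # 1 mm wire

def wire_rc(width_m):
    r = RHO_CU * LENGTH / (width_m * THICKNESS)   # resistance along the wire
    c = EPS_OX * (width_m * LENGTH) / SPACING     # plate capacitance to the layer below
    return r, c

for width in (200e-9, 100e-9, 50e-9):
    r, c = wire_rc(width)
    print(f"width {width*1e9:5.0f} nm: R = {r:6.0f} ohm, C = {c*1e15:5.1f} fF, "
          f"RC = {r*c*1e12:4.1f} ps")
```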

So as you scale to smaller and smaller features, the dominant performance effect becomes the speed of interconnect wires, not the speed of transistors. The speed of the transistors becomes irrelevant. Your limitation is the interconnect. As you speed up the clock, the distance a signal can travel and the number of transistors you can reach in one clock period goes down.

One thing you can do about it is to make fatter wires, but that becomes self-defeating as the wires take up more die space and push the transistors farther apart, increasing interconnect distances. Another solution is the use of low-K materials for the insulators in place of silicon oxide. Low-K materials have a lower dielectric constant, which reduces the capacitance. But ultimately, you reach a limit on how fast you can clock a chip and still get signals from one edge to the other in one clock cycle.
posted by JackFlash at 10:28 PM on December 9, 2007


The Intel Netburst architecture had a very long pipeline, and a couple of pipeline stages existed for no reason except to provide time for signals to propagate across the chip, because there was no way to arrange the chip layout to keep everything that needed to be close together actually close together.

Typically signals in this environment propagate at about 80-90% of the speed of light. At 4 gigahertz, light travels 7.5 centimeters in one clock cycle, or about 3 inches.
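A quick Python check of that arithmetic, using the vacuum speed of light (real signals in wire are somewhat slower, as noted):

```python
# How far light travels in one clock period at a few clock rates.
# Vacuum speed of light; real signals in wire are somewhat slower, and
# on-chip delay is dominated by RC charging anyway.

C_LIGHT = 2.998e8  # metres per second

for f_ghz in (3.0, 4.0, 7.0):
    period_s = 1.0 / (f_ghz * 1e9)
    dist_cm = C_LIGHT * period_s * 100.0
    print(f"{f_ghz:3.1f} GHz: ~{dist_cm:4.1f} cm (~{dist_cm/2.54:3.1f} in) per clock cycle")
```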

At 3 gigahertz, they didn't need that delay. But the Netburst architecture was designed to run at something like 7 gigahertz, on the assumption that eventually fabrication techniques would make that possible. At those speeds you have to pull tricks like that.

But you pay a price for it. The longer the pipeline, the greater the hit on a branch prediction failure. Branch prediction is a black art, and generally they're very good at it, but it isn't possible to be perfect. The Netburst architecture paid a particularly stiff price for a branch prediction miss.

Intel's current generation is, ironically, a step backwards. It's a redesign/refresh of an older architecture, and the pipeline is shorter. Their design facility in Haifa pulled off a modern miracle in cleaning it up, and that's why the Core 2 kicks ass. But the Core 2 can't scale to 5 or 6 GHz; it simply cannot be pushed that fast.

However, since it seems no one can get a process to run that fast anyway, or at least won't do so any time soon, that's hardly a problem.
posted by Steven C. Den Beste at 11:58 PM on December 9, 2007 [1 favorite]


People are talking about architectural differences, but SCDB's got the right idea -- it's a quantum-mechanics-level limitation of the actual silicon. We have plenty of researchers capable of addressing the architectural issues, but they're not being funded because of known physical limits. If you're wondering why we can have 9 GHz ASICs but not equally fast CPUs, it's because CPUs require quite a bit of silicon real estate -- all on the same clock (mostly the cache). It's just too long a distance for signals to propagate (which, as an aside, is just one of the physical limits -- the other is transistor switching).

I disagree that power is the current limit. My theory (and I don't have anything to back this up) is that power just became the new easily quantifiable marketing point. If you can't sell higher-gigahertz machines, you sell lower-wattage ones and convince your customers that it's the most important decision they make when buying servers. It's not necessarily a bad concept, just a different direction.

As far as other types of transistors go, it takes a VERY long time for those to hit the market. You would need to change every single link in the CAD tool chain -- synthesis models need to be made and tested, experiments of all sorts need to be done, manufacturing processes changed, etc. It's not remotely trivial.
posted by spiderskull at 11:58 PM on December 9, 2007


Typically signals in this environment propagate at about 80-90% of the speed of light. At 4 gigahertz, light travels 7.5 centimeters in one clock cycle, or about 3 inches.

On-chip signal delays are not governed by propagation at the speed of light. Obviously chips are much, much less than 3 inches across. The delay of signals on-chip is determined by the RC time constant, a function of resistance and capacitance, not the speed of light. The RC delay determines how long it takes to charge up a wire and raise the voltage level to the threshold of detection by the destination device.
posted by JackFlash at 12:28 AM on December 10, 2007


It's true that the barrier to switching from MOSFETs to some other technology is extremely steep.

Another problem operationally is atomic drift. (Again, I don't know what the proper term for it is.) When you have different conductors at an intersection, and push current through it, there's a tendency for the atoms to move inside the crystalline structure. It isn't necessary for it to be molten; this can happen at any temperature. At a conductive junction between aluminum and copper (for instance) over a period of time the interface gets a bit blurred, with aluminum and copper atoms intermixed. I've heard of this kind of thing resulting in welding contacts together, so that they had to be cut apart.

But most of the time that effect can be ignored, because the atoms don't move very far. Unfortunately, when you're talking about junction layers that may only be 15 atoms thick, or even less, it's no longer a negligible effect. In the case of silicon chips, one problem is doping atoms moving, which means the effective doping level changes over time, maybe enough so that the chip ceases to work properly. It means the effective lifetime of the chip may be shorter, and if you get it wrong it may be too short to be commercially acceptable.

Low current helps that; the amount of drift is a function of current. But it's still a concern.

As geometries get smaller, crosstalk becomes an increasing concern. Crosstalk is caused by capacitive or inductive coupling of neighboring conductors, and it increases the noise level, which makes your signals less clean. That's a particular problem with MOSFETs because they have preposterously huge gammas, so it doesn't take much parasitically induced current at all to make a FET switch on when you don't want it to.
posted by Steven C. Den Beste at 12:30 AM on December 10, 2007


it's a quantum-mechanics-level limitation of the actual silicon.

No, it's not quantum mechanics. It's elementary circuit analysis -- simple RC.

as an aside, is just one of the physical limits -- the other is transistor switching

As I pointed out above, transistor switching is not the dominant factor. RC delay swamps out gate delay at smaller geometries. The longer the wire, the longer it takes to charge and discharge.
posted by JackFlash at 12:37 AM on December 10, 2007


The delay of signals on-chip is determined by the RC time constant, a function of resistance and capacitance, not the speed of light.

JackFlash, I knew that signal propagation was not a function of the speed of light as such.

But the speed of propagation cannot be greater than c; that would violate Special Relativity. So showing how far light travels in one clock cycle is a good way to get an intuitive feel for just how ridiculously fast the clock rate is.

Grace Hopper used to give away nanoseconds at her lectures. She came to Tektronix while I worked there, but I didn't hear about it until after the fact. Some people I worked with who did attend her lecture proudly hung their nanoseconds on their benches. (I was envious.)

A Grace Hopper "nanosecond" was a piece of wire about 10 inches long. That's how far a signal propagated through that wire in one clock cycle at 1 GHz.
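The arithmetic behind the wire length, as a tiny Python sketch; the 0.85 velocity factor for a signal in wire is an assumed value, in line with the 80-90% of c quoted earlier in the thread:

```python
# Length of a "Grace Hopper nanosecond": how far a signal gets in 1 ns.
# The 0.85 velocity factor is an assumed typical value for propagation
# in wire, consistent with the 80-90% figure quoted above.

C_LIGHT = 2.998e8    # metres per second
VELOCITY_FACTOR = 0.85

length_m = VELOCITY_FACTOR * C_LIGHT * 1e-9   # one nanosecond of travel
print(f"~{length_m*100:.1f} cm, or ~{length_m/0.0254:.1f} inches of wire per nanosecond")
# ~25.5 cm, i.e. roughly ten inches, which lines up with the wires she handed out.
```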

Laymen are used to thinking of the speed of light as being "so fast it's effectively infinite". Technical people know better, especially in electronics. We run up against the speed of light all the time. (We had serious problems with it doing CDMA, too. Signal latency on the radio link was really substantial and a pain in the tail to compensate for.)

And it's amazing but true that the speed of light is a substantial barrier to making chips go fast, despite the fact that the chips are so tiny.
posted by Steven C. Den Beste at 12:38 AM on December 10, 2007


as an aside, is just one of the physical limits -- the other is transistor switching

Actually, the gate capacitance of MOSFETs now is so absurdly tiny that the FETs can change states in picoseconds. The limiting factor is how long it takes to pump enough charge into (or suck it out of) the gate so that it produces (or ceases to produce) enough of an electric field to change the behavior of the channel.

It's another case of granularity. It's almost to the point that you can count the number of individual electrons that have to be moved in order to make the FET switch.
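A rough sense of those numbers in Python; the capacitance, voltage, and drive current are assumed order-of-magnitude values, not data for any particular process:

```python
# Order-of-magnitude sketch: time to move the gate charge, and how few
# electrons that charge amounts to. All values are assumed ballpark figures
# for a small late-2000s transistor, not figures for any specific process.

GATE_CAP = 1e-15        # ~1 femtofarad of gate capacitance
VDD = 1.0               # supply voltage, volts
DRIVE_CURRENT = 100e-6  # ~100 microamps of drive current
E_CHARGE = 1.602e-19    # charge of one electron, coulombs

gate_charge = GATE_CAP * VDD                 # Q = C * V
switch_time = gate_charge / DRIVE_CURRENT    # t ~ Q / I
electrons = gate_charge / E_CHARGE

print(f"gate charge     : {gate_charge*1e15:.1f} fC")
print(f"switching time  : {switch_time*1e12:.0f} ps")
print(f"electrons moved : ~{electrons:.0f}")
```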

(I just noticed that Wikipedia has a discussion of the problems of MOSFET scaling.)
posted by Steven C. Den Beste at 12:50 AM on December 10, 2007


Best answer: I agree with everything saeculorum and SCDB have said. saeculorum's take is more practical and more indicative of the real kinds of problems that crop up when you try to increase the clock speed. SCDB is describing some of the second-order effects that we have to worry about now, but didn't before, which are at the root of why things have changed.

I want to share some more slides from that Stanford course: Wires lecture. (The authors of this course, Mark Horowitz and Ron Ho, are leaders in this field. Both have high academic profiles and very strong industry links.)

SCDB:
Recent advances in process technology have replaced the thin insulating layers with thicker high-K metal-ish layers. I don't know what's in the layer (it's proprietary) but it gives you the same effect with better thickness. See here.

The 'atomic drift' problem is known as "electromigration" in the industry.

What SCDB says about transistor switching is true. A min-sized inverter driving another min-sized inverter takes about 15ps to switch (simulated) in the 65nm process I have available to me.
posted by PercussivePaul at 2:36 AM on December 10, 2007


Best answer: I actually studied under Horowitz (mentioned by PercussivePaul above) and Hennessy, designing high-performance CPUs. All of the factors mentioned above -- power density, electromigration, crosstalk, etc. -- are problems for scaling (and Moore's Law). In spite of all the difficulties cited above, designers are still managing to scale down feature sizes. But the OP's question wasn't about the difficulties of scaling, but about the apparent plateau in clock speed. The anomaly prompting his question is why clock speeds haven't gone up even though scaling has continued, albeit with difficulty.

Scaling per se is not primarily about clock speed. It is about reducing cost -- the number of transistors per square millimeter. For a while, clock speed was a pleasant side effect of shrinking feature size. Smaller transistors are faster because they have shorter gate lengths and smaller gate capacitances. But as you get smaller, transistor speed is no longer the issue. The limiting factor is the RC delay of the wires connecting those transistors. You simply can't push the clock higher and still get across the chip in one clock cycle. Short wires are not a problem, since they get shorter as you scale. Long wires are a big problem: they don't get faster, they get slower.

So instead of directly connecting more and more transistors, you have to break functions into smaller chunks, which is why they are putting two or four CPU cores on each die. The transistors are almost free compared to the interconnect. You can throw lots of transistors at a problem, doing things like speculative calculation, even if you often just throw away the results when they aren't needed. Parallelism is very hard and often not very efficient. Sometimes big chunks of silicon stall waiting for results from other pipelines, but even if it isn't very efficient, it is becoming very cheap because of the number of transistors available. The bottom line is that clock speed is tapering off because of the RC problem on silicon. Performance is now primarily being gained by parallelism using lots of transistors rather than faster clocks.
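The inefficiency of parallelism has a standard quantification that isn't named in the thread, Amdahl's law; here is a tiny Python sketch with an assumed serial fraction:

```python
# Amdahl's law: the speedup from N cores when a fraction s of the work is
# inherently serial. The 10% serial fraction is an arbitrary assumption
# chosen for illustration.

def amdahl_speedup(n_cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

SERIAL = 0.10
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores -> {amdahl_speedup(n, SERIAL):4.2f}x speedup")

# With 10% serial work, 16 cores deliver only ~6.4x: transistors are cheap,
# but turning them into performance through parallelism is not automatic.
```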

One way to look at this is the power of the human brain. The switches in the brain operate on the scale of milliseconds, more than 1000 times slower than the switches in silicon, yet the brain makes up for it because of its massive parallelism.
posted by JackFlash at 11:49 AM on December 10, 2007


No one seems to have mentioned the leakage current problems upthread, so I'll chuck that into the mix as well: Up until very recently, process shrinks were effectively a free lunch for chip performance: you got faster transistors that needed less power and you could pack more of them into the same chip area. Win!

This all fell apart in the 180 nm -> 130 nm transition (and beyond), IIRC; at this point the power lost to leakage across the transistor junction became a significant problem, and it only gets worse the smaller you make the components. At the same time, CPUs had pretty much hit the limit of the heat that could be dumped via forced-air cooling. Suddenly, chip designers had to start making tradeoffs that had never been necessary before: fast, small, efficient; pick any two. The total power budget ceiling was set by the cooling solutions the market would accept, combined with probable resistance to the size of the electricity bill if more power-hungry CPUs had been on offer.

There's an article on realworldtech which goes into a little bit more detail (see part 2). I'm sure there are other articles around if you go digging.
posted by pharm at 12:20 PM on December 10, 2007


FWIW, people say they get around the increase in clock speed by putting multiple cores in the same processor unit. But that's not really an even trade. Parallel programs are enormously difficult to create reliably. Pretty much only Microsoft will be able to take advantage of multiple cores, and that's because they don't care if their programs work. Everybody else will find parallel programming very difficult, frustrating and fraught with errors.
posted by vilcxjo_BLANKA at 12:41 PM on December 10, 2007


That's a horribly oversimplified statement, vilcxjo_BLANKA. ASIC and FPGA designers have worked with parallel processing for 25 years.
posted by saeculorum at 1:08 PM on December 10, 2007


Response by poster: thanks all, especially jackflash and percussivepaul for their links.

i am working on designing optical interconnects and my officemate is working on low-k dielectric synthesis, so i appreciate some of the nods to these directions. but we are scientists, not engineers/VLSI/semiconductor people, and in academia, so we are somewhat insulated (ha) from the realities of industry.

anyway, it sounds like there is no one answer (though 45nm process dimensions are still too large for quantum size effects to show up - the de broglie wavelength of electrons in Si is about 10nm - the conduction band is still fairly continuous at that size. i am sure the real problems are more prosaic than that.)

however, one of the links upthread pointed me to the ITRS reports, which is exactly the sort of report i'm looking for. so, problem solved. thanks again!

p.s. jackflash - i loved your boss (hennessey)'s book. kick ass! that is all.
posted by sergeant sandwich at 2:22 PM on December 10, 2007


Best answer: Now that I re-read the original question, I see that the OP wanted sources to cite. I did a quick look around the webs and here's what I came up with.

From Dr. Dobb's Journal, a software-oriented industry publication:
The free lunch is over: A fundamental turn toward concurrency in software [pdf]
It has become harder and harder to exploit higher clock speeds due to not just one but several physical issues, notably heat (too much of it and too hard to dissipate), power consumption (too high), and current leakage problems
(doesn't cite sources... take with a grain of salt)

From an IBM presentation [pdf] at ISSCC about the Cell processor:
Mentions three "walls" that challenge performance: the memory wall (penalties due to cache misses), the frequency wall (related to pipeline segment size - all calculations in one segment must complete within a clock cycle, some segments have a lot of work to do and it can be tough to do in one cycle), and the power wall.

From a paper in IEEE Micro [subscription required, try a university library]:
As the on-chip clock frequency reaches 4 GHz, it becomes difficult to dissipate the switching and leakage power losses. In addition, merely increasing clock speed can no longer reduce processing time significantly because of memory latency and other pipeline bottlenecks. Thus, there are very few incentives to running at 4 GHz using the current process technology.
Also mentions the three walls. There is a lot of discussion here about a complicated issue. I think this is the source you need. It's a peer-reviewed publication with a well-known author; you can trust it.

From that article, a citation from Intel that they had "hit the power wall" with traditional clock-speed scaling.
posted by PercussivePaul at 2:26 PM on December 10, 2007


And on non-preview: The ITRS reports are a great source, but they are strictly from the process end and won't tell you much about microprocessor architecture, which is what you need to think about if you are interested in clock speed. I think they will be informative but the IEEE Micro article is a better source.
posted by PercussivePaul at 2:33 PM on December 10, 2007


Response by poster: paul, the IEEE micro article does look perfect. thanks again.
posted by sergeant sandwich at 2:34 PM on December 10, 2007


I just stumbled across a relevant article while researching something else.

There is a good deal of variation from chip to chip due to various random influences that are an unavoidable part of the fabrication process. These variations have been getting worse with each advancement in technology. This variation affects the speed and power consumption of transistors and wires, so it is of critical significance. There is some safety margin built into the chip, but essentially if you design a chip to run at 2 GHz, some of your output will run at 2, but some will run at 2.2, and some will run at 1.5. Chips are tested and sorted based on their best operating frequency and sold as different products for different prices. This process is known as "binning".

Now, this post analyzes the leakage current of various AMD chips. There is a direct correlation with leakage and speed, strongly implying that those chips which (randomly) wind up with lower leakage currents are the chips that are sold as high-end, operating at the fastest frequencies. Those which randomly get high leakage (30 amps, and by the way, holy shit that's a lot of amps!) are "downbinned", set at a slower operating frequency so that the chip stays within the power spec, and sold as a cheaper part. (note that there is some debate on that forum about the validity of this analysis.)
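Here is a toy Python sketch of the binning logic being described; the frequency grades, power limit, and per-die power curves are all invented for illustration:

```python
# Toy model of speed binning: each tested die is assigned the highest rated
# frequency it can run at while staying inside the package power limit.
# The grades, power limit, and power curves are invented for this example.

POWER_LIMIT_W = 95.0
GRADES_GHZ = [2.6, 2.4, 2.2, 2.0]   # sold as different, decreasingly priced parts

def bin_die(max_stable_ghz, power_at):
    """power_at(f_ghz) estimates this die's power draw at a given clock."""
    for grade in GRADES_GHZ:
        if grade <= max_stable_ghz and power_at(grade) <= POWER_LIMIT_W:
            return grade
    return None   # fails even the lowest bin: scrapped

def low_leakage_die(f_ghz):
    return 25.0 * f_ghz + 10.0    # hypothetical watts-vs-GHz curve

def high_leakage_die(f_ghz):
    return 25.0 * f_ghz + 40.0    # same dynamic power, much more leakage

print("low-leakage die :", bin_die(2.7, low_leakage_die), "GHz bin")
print("high-leakage die:", bin_die(2.7, high_leakage_die), "GHz bin")
# The leaky die is logically capable of 2.6 GHz but lands in the 2.2 GHz bin
# because it would blow the power budget at the higher clocks.
```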

And here's a really interesting article about binning which says that chips that come out faster than spec, which in the old days would be sold as high-performance chips for big bucks, now get thrown out because running at this high speed pushes them past the power spec and puts them in danger of burning up while in operation.
posted by PercussivePaul at 5:43 PM on December 12, 2007


This thread is closed to new comments.