Why does computer chip process size have to keep getting smaller?
November 1, 2006 1:21 AM   Subscribe

Computer chip process size: 130 nm -> 90 nm -> 65 nm. Why do we need to keep making it smaller? Why can't we just make the chips bigger?

Every time we shift to a new crazy level of nanometeromification, the chip companies have to battle through a barrage of technical obstacles in design and production: leakage, heat production, power consumption... And when they crack it, they always say, "With this new process, we can fit more processing power into a chip!"

Instead of all this hard work, why can't we just increase the physical area of the whole chip, and put more transistors on it? It's not as if your CPU die is taking up a lot of physical space in your average computer system.

My (declining) physicsy intuition tells me that it might be something to do with the sheer distance that you can get signals to travel around the chip within the timeframe of one clock tick. But I haven't been able to find any material to back up this idea, or to blow it out of the water with a better explanation. So what's going on?
posted by chrismear to Technology (16 answers total) 3 users marked this as a favorite
 
At the speed of modern processors, size greatly affects timing.

The speed of light is 299,792,458 m/s
If my math is right, at 4GHz, light gets to travel roughly 7.5cm in a single clock tick.
Electrons move significantly slower than the speed of light when they're moving through something.
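A quick sanity check of that math, as a sketch (the 4 GHz clock is just the example figure from above):

```python
# How far light travels in one clock tick at 4 GHz.
C = 299_792_458   # speed of light, m/s
CLOCK_HZ = 4e9    # example 4 GHz clock

tick_seconds = 1 / CLOCK_HZ
distance_m = C * tick_seconds
# Signals in silicon travel well below c, so the real margin is tighter.
print(f"{distance_m * 100:.1f} cm per tick")  # prints "7.5 cm per tick"
```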
posted by krisjohn at 1:32 AM on November 1, 2006


I'm not a semiconductor engineer, but my understanding is that there are a number of reasons. First, if you can make the transistors smaller, you can put more transistors on a wafer, potentially reducing the cost per transistor. Also, smaller transistors use less power, which is quite an important issue these days. Also, I believe smaller transistors switch faster, which is partly responsible for clock speed increases. Finally, as the previous post mentioned, propagation delays are smaller if the chip is smaller.
posted by cameldrv at 2:20 AM on November 1, 2006


My understanding was that the smaller die processes created less heat and allowed processors to scale further.

I can't source this appropriately at the moment, but leakage is a significant problem that must be overcome at each decreasing size, as mentioned in the wiki on 65nm.
posted by disillusioned at 2:20 AM on November 1, 2006


It's very expensive to make large chips, partly because a greater area of silicon costs more, and partly because a single flaw means you have to throw out a larger area.
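A common first-order way to see the flaw argument is the Poisson yield model, Y = exp(-D·A): yield falls off exponentially with die area. A sketch (the defect density figure is made up for illustration):

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of dies expected to be defect-free: Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D = 0.5  # hypothetical defects per cm^2
small = poisson_yield(D, 1.0)  # 1 cm^2 die
large = poisson_yield(D, 4.0)  # 4 cm^2 die, i.e. "just make it bigger"
print(f"1 cm2 die yield: {small:.0%}, 4 cm2 die yield: {large:.0%}")
```

Quadrupling the die area doesn't just quadruple the silicon cost; it craters the fraction of chips that work at all.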
posted by hoverboards don't work on water at 2:37 AM on November 1, 2006


Best answer: To summarize the technical mumbling in the Wikipedia article on MOSFETs (Metal-Oxide-Semiconductor Field-Effect Transistors):
  • If the transistors are smaller, there is less resistance when they are on. Resistance = heat.
  • There is also less capacitance, which means the transistors can switch faster and more efficiently.
  • Smaller chips means more can be produced per wafer; silicon semiconductor wafers of the sort that Intel uses can be 12" across (they're circular) and they're cut from huge logs of semiconducting material and end up costing a few thousand dollars each.
So, small transistors are less hot, more efficient, faster, and cheaper. It's good for everyone. Also, when you have fewer transistors, fewer of them fail. On a typical wafer (image), a number of the chips will be entirely useless, and the manufacturer generally has no way of knowing in advance how fast each chip will be. A 4 GHz chip and a 3.5 GHz chip can be right beside each other on the wafer; they're tested to see how fast they can go, the speed is stamped on them, and they get put in a package. It's all very handwavy, like a lot of the really esoteric bits of computing once you really get down to the details. The number of good chips each company actually gets off of each wafer (the "yield") is something highly talked about but never officially revealed by either Intel or AMD.
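The "more per wafer" point can be sketched with the standard gross-dies-per-wafer approximation for a 12" (300 mm) wafer: an area term minus an edge-loss term. The die sizes below are hypothetical:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Standard approximation: wafer area / die area, minus dies lost
    to the curved wafer edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Halving die area roughly doubles the die count on a 300 mm wafer,
# and the edge-loss penalty shrinks too.
print(dies_per_wafer(300, 200))  # larger (hypothetical) die
print(dies_per_wafer(300, 100))  # smaller die
```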

Intel would like to be running their 45nm process by Q4 2007, and there are generally plans bubbling around to get down to the ~20nm level by 2011 or so. Hardware is vastly outpacing software. Software is my job, and it is depressing how much all of it sucks compared to the stuff it's running on.
posted by blacklite at 2:42 AM on November 1, 2006 [1 favorite]


One could argue that the current trend towards multiprocessor machines is a step towards "more silicon area per PC".
posted by Jimbob at 4:09 AM on November 1, 2006


Best answer: hoverboardsdontworkonwater has the major reason identified... substrate flaws reduce yields and economic/physics considerations account for most of the rest.

A larger IC takes EXACTLY the same number of processing steps as a smaller one... Each step adds to the cost. At the end of the process, each chip on the wafer carries 1/Nth of the cost, where N = number of good chips. Bigger N = lower cost. Bigger wafer = bigger N. Smaller chips = bigger N. Higher yield = bigger N. Lower wafer defect count = bigger N.
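That arithmetic in sketch form (all dollar figures are hypothetical):

```python
def cost_per_good_chip(wafer_cost: float, good_chips: int) -> float:
    """A wafer costs the same to process regardless of what's printed on it,
    so per-chip cost is just wafer cost / good-chip count N."""
    return wafer_cost / good_chips

WAFER_COST = 5000  # hypothetical fixed processing cost per wafer
print(cost_per_good_chip(WAFER_COST, 100))  # bigger dies, small N: $50.00
print(cost_per_good_chip(WAFER_COST, 400))  # smaller dies, bigger N: $12.50
```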

Performance issues including propagation delays and switching speeds are influenced by feature size, too. CMOS chips consume most of their power during transitions between states, placing a premium on switching speeds... i.e., get through the transition ASAP. For a variety of structural reasons, smaller and closer = faster. The tradeoff is power at this level, though.
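The usual first-order formula for that transition power is P = α·C·V²·f. A sketch with illustrative values (not taken from any real chip):

```python
def dynamic_power(activity: float, capacitance_f: float,
                  voltage_v: float, freq_hz: float) -> float:
    """First-order CMOS switching power: P = alpha * C * V^2 * f."""
    return activity * capacitance_f * voltage_v**2 * freq_hz

# Shrinking features lowers the switched capacitance (and usually the
# supply voltage), which is why smaller transistors burn less power
# per transition even as clocks rise.
P = dynamic_power(activity=0.1, capacitance_f=1e-9, voltage_v=1.2, freq_hz=3e9)
print(f"{P:.2f} W")  # prints "0.43 W"
```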

Fun stuff, huh?
posted by FauxScot at 5:02 AM on November 1, 2006


Technically, it's multi-core machines, not multi-processor: more transistors, less silicon.
posted by unmake at 5:07 AM on November 1, 2006


Also, I wanted to add that depending on the application you might just go with a larger chip. The world of PCs will be ruled by the newest, smallest chips, but in the world of embedded electronics you might go with an older/weaker/bigger chip. In fact, one of the recent Mars missions used an older Intel 486 chip.
posted by damn dirty ape at 7:12 AM on November 1, 2006


[T]he chip companies have to battle through a barrage of technical obstacles in design and production, leakage, heat production, power consumption

Smaller dies actually consume less energy and therefore produce less heat. However, quantum effects take over at these smaller scales, which is the main technical hurdle to overcome.
posted by Blazecock Pileon at 7:43 AM on November 1, 2006


90nm was the major stumbling block, as I recall; 65nm wasn't nearly as big a deal.
posted by kindall at 8:15 AM on November 1, 2006


quantum effects take over at these smaller scales

Quantum effects are the whole reason transistors work in the first place. :)

90nm was the major stumbling block, as I recall; 65nm wasn't nearly as big a deal.

I think it is better to think of the stumbling block as clock speed. Going from 130nm to 90nm didn't allow them to increase clock speed the way they thought it would, but going from 90nm to 65nm hasn't allowed a clock speed increase either. They haven't passed the stumbling block, they just stopped aiming for >4GHz chips.
posted by Chuckles at 9:05 AM on November 1, 2006


Best answer: 90nm was a huge problem due to so-called 'leakage' currents. these chips consumed more idle power than chips with larger Leff, and for a while no one was really sure if there was a solution to this problem.

so it's not always the case that smaller = less power. power is always a real concern, as is propagation delay (this is why the pentium 4 is so heavily pipelined - signals only have to travel between two stages of a pipeline in one clock cycle - but as we found out the p4 was way too deeply pipelined and performance sucked). as hoverboards points out, the real issue in the semiconductor industry is yield. this is always a closely guarded secret, since it's directly tied to die cost, which is directly tied to profit.
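The pipelining point can be sketched like this: the clock period only has to cover the slowest single stage, so splitting the same logic into more stages permits a higher clock (stage delays below are made up, and latch overhead is ignored):

```python
def max_clock_hz(stage_delays_ns: list[float]) -> float:
    """The clock period must cover the slowest pipeline stage."""
    return 1 / (max(stage_delays_ns) * 1e-9)

unpipelined = max_clock_hz([4.0])               # one big 4 ns block
pipelined = max_clock_hz([1.0, 1.0, 1.0, 1.0])  # same logic in 4 stages
print(f"{unpipelined / 1e9:.2f} GHz -> {pipelined / 1e9:.2f} GHz")
```

The catch, as the P4 showed, is that deeper pipelines raise the cost of branch mispredictions and stalls, so the clock gain doesn't translate directly into performance.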
posted by joeblough at 9:07 AM on November 1, 2006


Okay, so my saying 'clock speed' is a bit simplistic, because leakage losses are static losses. However, leakage is made worse by things you do to increase clock speed. Or, maybe it is better to turn that around: measures which reduce leakage cause reduced clock speeds. This Altera article is interesting, and doesn't require too much background - Stratix II 90-nm Silicon Power Optimization.
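That static/dynamic distinction in sketch form: leakage is a static loss that doesn't drop when you lower the clock, while switching power does (all wattages hypothetical):

```python
def total_power(static_w: float, dynamic_w_at_fmax: float,
                freq_frac: float) -> float:
    """Dynamic (switching) power scales with clock frequency;
    static (leakage) power does not."""
    return static_w + dynamic_w_at_fmax * freq_frac

full = total_power(static_w=20, dynamic_w_at_fmax=60, freq_frac=1.0)
half = total_power(static_w=20, dynamic_w_at_fmax=60, freq_frac=0.5)
print(full, half)  # halving the clock doesn't come close to halving power
```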
posted by Chuckles at 9:47 AM on November 1, 2006


One could argue that the current trend towards multiprocessor machines is a step towards "more silicon area per PC".

This isn't strictly true. There are certainly multi-processor machines on the market, but we're seeing more multi-core machines instead. Keeping both cores on the same die makes the generally challenging problems associated with multiple processors (particularly caching) somewhat more tractable. Throwing more distinct processors at a problem doesn't necessarily make it run faster, which is why multi-core processors (IBM's Cell processor is the extreme example of this) are a current focus.

And as remarked above, I just want to add to the chorus of voices discussing yield as the primary reason we don't make bigger chips. That's how it was always explained to me - there is some chance of a bad bit of silicon occurring in the wafer that's independent of process size (though they try to push that probability lower through better wafer production methods), so there's a point at which you just can't make the chip size any larger before it destroys your yield.

This is kind of a tangent, but I love that in the Core 2 Duo ads that Intel is showing these days they actually show an image of the die. You can totally see the two cores and the two caches very clearly. I think it's really exciting that marketing has moved from the black-box attitude of earlier Intel marketing to showing people images of processors. I know it doesn't really mean anything to the public, but I feel like it's a step towards making processors a little less magical.
posted by heresiarch at 2:48 PM on November 1, 2006


also, didn't the first P6 (pentium pro) processors have a hologram representing the chip on the outside of the cartridge?
posted by joeblough at 10:52 PM on November 1, 2006

