3 GHz is slow?
February 13, 2006 6:26 AM

How can a 3 GHz PC be "slow"?

The first PC I ever used at work was one of the old 4.77 MHz dual-floppy IBM XTs back when they were new. When we got one of the new ATs in, we marvelled at how the "Baby" game, written with the XT in mind, was almost unplayably fast on the new machine's 8 MHz 286 processor.

Flash forward twenty years and I'm sitting in the office with some co-workers trying to help one of the tech-challenged among us select a new home PC. One of the comments is something like "Stay away from the Celerons, they're slooowwww...." Now this is a computer that is nearly 1,000 times as fast as the first XTs, and it's slow?!? I guess I'm showing my age, but I'm shocked and offended to even consider that that could be so.

The real question is: why is this so? I suppose the easy part of the answer is that the software being written today does so much more than early PC software did, what with bitmapped displays, persistent network connectivity, multi-tasking, background processing, etc. But I can't imagine that really explains why, as machines keep getting faster, they seem just as fast, or slow, as ever from a user's perspective.

How is the way the software is constructed related to this? Each successive iteration of nearly every product I can think of is at least an order of magnitude larger than the last. Windows 3.1 came on 7 or 8 floppies, Windows 95 on a CD, XP on a DVD(?)... The underlying instruction sets haven't grown at that rate. Is this all just about inefficiently written or generated software?
posted by hwestiii to Technology (26 answers total)
 
For starters, there is/was some truth to that whole "GHz myth" that Apple pushed with their processors. Intel made some design decisions with the Pentium 4 chip to get the clock speed up that make individual instructions take longer to execute, such that an AMD chip running at a considerably lower clock speed can be faster than the P4. So it's misleading to compare clock speeds between processors.
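A rough back-of-the-envelope way to see this (the instructions-per-clock numbers below are made up for illustration, not measured benchmarks): what matters is clock speed multiplied by how much work gets done per clock cycle, not the clock alone.

# Toy comparison: effective throughput = clock (GHz) x average instructions per clock (IPC).
# The IPC figures are illustrative assumptions, not real measurements.
chips = {
    "Pentium 4 (long pipeline)":    {"clock_ghz": 3.0, "ipc": 0.8},
    "Athlon 64 (shorter pipeline)": {"clock_ghz": 2.0, "ipc": 1.4},
}

for name, c in chips.items():
    throughput = c["clock_ghz"] * c["ipc"]  # billions of instructions per second
    print(f"{name}: {c['clock_ghz']} GHz x {c['ipc']} IPC = {throughput:.1f} G instr/sec")

# With these made-up numbers the lower-clocked chip wins, 2.8 to 2.4.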

There's also the issue of Moore's law. Processor speeds have climbed so rapidly that developers have gotten used to a newer, faster processor every year, and have designed their software accordingly. Likewise, development methodologies and languages have moved towards higher levels of abstraction between the application and the hardware. This has the benefit of allowing faster development, and development of more complex systems, with a tradeoff in efficiency.

Then there are all the tasks we expect from an OS today, like networking and pretty GUIs and managing a myriad of background tasks while providing a responsive user experience.

It all adds up, to say the least.
posted by kableh at 6:36 AM on February 13, 2006


When someone says "Stay away from this, it's slow" they really mean "Stay away from this, it's slow compared to the other things you can get for the same money", or in the case of the Celeron specifically they might mean "Stay away from this, it's slower than a non-Celeron-based PC of the same clock speed". It has nothing to do with the speed of any of those compared to older computers.
posted by mendel at 6:38 AM on February 13, 2006


Most people don't buy software because it's small or fast. They buy it for new features. This inevitably leads to software that takes up as much space as it could ever need, that is just fast enough not to be annoying, and that has tons of features you don't need.
posted by smackfu at 6:41 AM on February 13, 2006


The reason you're being told it's slow has nothing to do with the "3GHz" and everything to do with the "Celeron"; Celerons and Pentiums use the same core, but the Celerons have less L2 cache and lower bus speeds. Which means that if you put two computers side-by-side, with identical configurations except that one has a 3GHz Celeron and the other has a 3GHz Pentium 4, the Pentium 4 will be faster.
posted by ubernostrum at 6:43 AM on February 13, 2006


Response by poster: All good points. But what is contributing to the growth in the size of the software? That has to be part of the answer here.

For instance, a couple weeks ago, I was looking at a product called "Easy Receipts", a USB personal scanner that will scan restaurant receipts, etc., run them through some OCR, and intelligently populate your expense reports with the result. The thing wanted 1 GB of disk space for its install footprint. I had a Win95 version of OmniPage that wanted 33 MB to install.

As much smarter as the new application is, can that really account for it being 30 times larger, relatively speaking?
posted by hwestiii at 6:51 AM on February 13, 2006


There are lots of reasons why this would be so. The most prevalent reasons are software bloat and poorly designed machines.

Part of the problem is the vast quantity of background processes being run. Virus scanners. QuickTime taskbar thingy. Winamp taskbar thingy. AOL. MSN. RealMedia. Just about everyone wants to install some stupid taskbar widget. A brand new machine from Dell or whoever will come with somewhere around 150 megs of resident software that the novice doesn't know how (or whether) to get rid of. If you consider that lots of machines are still shipping with 256 megs of RAM, that doesn't leave a whole lot of RAM to work with. I honestly can't believe they ship modern computers with anything less than 512 MB. It's criminal.
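If you're curious how much all that resident stuff adds up to on a given machine, here's a rough sketch of how you could tally it (this assumes the third-party psutil library; the exact figures will vary wildly from machine to machine):

# Tally the resident memory of every running process -- a rough measure of how much
# RAM all those background widgets are holding onto. Requires the psutil package.
import psutil

total_mb = 0.0
for proc in psutil.process_iter(["name", "memory_info"]):
    mem = proc.info["memory_info"]
    if mem is None:  # some system processes deny access to their details
        continue
    rss_mb = mem.rss / (1024 * 1024)
    total_mb += rss_mb
    print(f"{(proc.info['name'] or '?'):<30} {rss_mb:8.1f} MB resident")

print(f"Total resident memory: {total_mb:.0f} MB")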

If the same machine were shipped with 512 MB or 1 GB of RAM, it would, in the end, be a lot zippier. But there's also the problem of perception. Most people will turn on a computer and expect to be able to use it as soon as the desktop is available. In truth, the desktop is made available early for appearances' sake, and your computer is most likely nowhere near ready when it pops up. On my home machine, I need to wait an additional 30 seconds before actually trying to do anything. If I don't wait, then things start slow, and sometimes have a tendency to stay slow for a while. But if you just give it a chance to finish booting, it will be much more responsive.

There's also the fact that most people just don't know how to maintain their computers. I can't count the number of computers I've sat down at to find 20 things in the system tray, 15 of which need not be running. If people knew how to maintain their computers, those things would be gone. People don't run virus or ad scanners, so they end up with 95% of their CPU being devoted to calculating the topography of a Nigerian email scammer's navel, which is then sent all around the world via Outlook worms.

But it really all does come down to software: I have an old Toshiba laptop which came with Win98. With Win98 it ran fine, played MP3s, even some movies. It certainly handled Office and web stuff just fine. The thing has 128 megs of RAM. On a whim, I installed XP on it. Just about the only thing it can do now is check email, and it's slow on that front. XP needs a minimum of 128 MB just to run, whereas Win95 ran just fine on 32 megs.

It's not really true, what your friend said about Celerons, though. They can be doggish in some high-requirement video games or with, say, 3D Studio, but in general they are fine. At least, in my experience.

I realize that I'm kind of rambling. I have no closing point, I just hope that I helped shed some light.
posted by jaded at 6:58 AM on February 13, 2006


Let's explore your "successive iteration" concept first:

Think about it this way. Microsoft has to develop their operating system to be able to work—out of the box—with THOUSANDS of pieces of hardware. The List. (Roughly speaking.)

Plug 'n Play is a wonderful tool that allows hardware to tell the operating system what it is, exactly, and (occasionally) where it might even look for drivers, so to speak. Lots and lots of those drivers are included in the Windows install, or at least on the Windows discs, although many hardware manufacturers are moving towards their own massive driver/support software installs as they've been doing for years. (My newest HP printer WILL NOT allow you to use it without installing their software first, even over the network. I couldn't believe this.)

XP is on CD, but Vista will be on DVD. Each new version of Windows includes many new elements. Besides a new user interface (which itself takes up very little space), Vista, for instance, will include an entirely new underlying graphics system. This system allows the entire interface display to be offloaded onto the video card, using Microsoft's own driver-interface stack. This allows them to strike up all these fancy effects without taxing your performance like the sweet fading menus of Win2k used to on older machines. This is one relatively small facet of the new operating system, but imagine how much code is required to build a driver set and software development kit that allows manufacturers to build hardware compatible with the new system AND allow software developers to take advantage of the new system, while maintaining backwards compatibility with non-DirectX 9/10 video cards.

Phew.

That's JUST the UI advancements for JUST Vista. And it's a helluva lot of code.

Part of the problem is that, indeed, people have gotten sloppy. It's a lot easier for some programmers to write something in 16,000 lines of code than in 12,000 lines of code. Because of how large hard drives and memory are getting to be, it's not that big of a deal, either, when an application takes 20 MB of space. Good developers are mindful of this, but that isn't to say that sloppiness isn't contributing.

You really do answer your own question in a few ways. Network activity? We need some software to interface with the TCP/IP stack. For maximum compatibility, we'll compile our new software with whatever other libraries we need for it to work fantastically with as many machines as we can get it on.

Multi-tasking? Absolutely. When you launch a new program, and its process priority is set to Normal, it'll tend to spike your CPU usage, running full-throttle to get the task done as soon as possible. A BIG part of the "slowness" bottleneck nowadays is your hard drive.

I have an AMD 64 x2 4400+ processor. The x2 means that there are two cores on the single chip; effectively two separate processors. One can be maxed out while the other provides me with non-stop solitaire, and it's absolutely fantastic. However, I only have one hard drive. And loading Battlefield 2 will still take me five minutes, because it needs to go grab 800 MB of a map off my (now-defragmented) drive and put it into RAM.

Guess what happens when you don't have enough memory? Things go into virtual memory. (Read: back to your hard drive.) This is ridiculously slow, orders of magnitude slower than your RAM. So now when you try to pull open an application you had minimized for some time, one that Windows thought you wouldn't be using for a while, it has to go find it in your page file on your hard drive, clear some space in the RAM, and load 'er up.
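To put "orders of magnitude" into rough numbers (every figure below is an assumption typical of mid-2000s hardware, not a measurement): restoring a swapped-out application is dominated by disk seeks and disk throughput, which a faster CPU can do nothing about.

# Back-of-the-envelope cost of paging a minimized app's working set back in from disk.
working_set_mb = 200        # memory the minimized app needs paged back in (assumed)
disk_throughput_mb_s = 40   # sustained read speed of a 7200 rpm desktop drive (assumed)
disk_seek_ms = 10           # average seek; a fragmented page file means lots of them
seeks = 500                 # assumed number of scattered page clusters to chase down
ram_bandwidth_mb_s = 3000   # rough DDR-era memory bandwidth (assumed)

disk_time_s = working_set_mb / disk_throughput_mb_s + seeks * disk_seek_ms / 1000
ram_time_ms = working_set_mb / ram_bandwidth_mb_s * 1000

print(f"From the page file: ~{disk_time_s:.0f} s of visible stalling")
print(f"Already in RAM:     ~{ram_time_ms:.0f} ms")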

Look at the scale of the new applications we're using. I heard that Office 97 had 250 features. Office 2003 had 2,500. Besides the massive usability problems this presents (which the next version of Office deals with pretty nicely, actually), it also represents a whole host of code and supporting libraries and artwork for each new function. And just because you don't use the automatic bibliography generator (which I just discovered, 5 years too late) doesn't mean half the college and high school kids using the software won't. That said, balancing the desire to give people what they want against the size and bloat of an application is remarkably difficult.

There are tens of millions of lines of code behind Windows Vista. Features, usability, drivers, functionality, help & other documentation, indexing, memory and application management, stability, networking, printing, and a thousand other things all go into that code. They're not all invoked when you launch your system, but your hard drive can only move so quickly.

Install your OS to your RAM. Install your applications to your RAM. Watch how fast everything moves. I'm of the impression that one of the biggest bottlenecks anyone can have is their hard drive. More so if it's heavily fragmented. Making your poor drive (which is growing larger and larger as technology advances) try to track down every random piece of data scattered all over the platter is miserable. New drives are getting faster, and some (SCSI drives, for example) are the fastest, but it's still rough.

This doesn't even begin to bring into the fray the spread of spyware, viruses and other problems. The average user is stupid enough to click the things that cause them problems. These beasts will run in the background, consume resources, steal network bandwidth, set the hard drive off doing crazy things, and fill up your page file. So it's no wonder a person can't launch Word in a timely manner when they're running some evil BonziBuddy mixed with some other random virus that's decided to DDoS some random web site.

The instruction sets don't need to increase for software developers to take advantage of the advances that have come with faster CPUs and GPUs. Memory and hard drives are a lot to blame for "slow" systems. Because, you'll notice that oftentimes, once your application has launched, it's launched, and you're fast.

And by the way, Celerons ARE slow, in comparison, because they significantly cut down the amount of what's called L2 cache—memory that exists on the processor die itself.

So while the instruction set may be the same, the processor isn't able to store and queue as much data as it needs, and it has to go back to RAM to see what's next. This takes a GREAT deal of time, relatively speaking.

To use the fancy-free analogy I've heard: L1 cache is sugar in your hands. L2 cache is sugar in your pantry. RAM is sugar from your neighbor. Your hard disk is sugar from the Albertson's down the street. It's all a matter of delivering the "what's next" to your processor, and then the meat of that equation. Once it has it in its grasp, it's usually quite smooth sailing.
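To hang some rough numbers on that analogy (these are order-of-magnitude latencies commonly quoted for hardware of this era, not measurements from any particular chip):

# Approximate access latencies, and how they compare to L1 cache, to show why a miss
# that falls all the way through to the hard disk hurts so much. Ballpark figures only.
latencies_ns = {
    "L1 cache (sugar in your hands)":       1,
    "L2 cache (sugar in the pantry)":       10,
    "Main RAM (sugar from the neighbor)":   100,
    "Hard disk seek (trip to Albertson's)": 10_000_000,  # roughly 10 ms
}

base = latencies_ns["L1 cache (sugar in your hands)"]
for level, ns in latencies_ns.items():
    print(f"{level:<40} ~{ns:>12,} ns  ({ns // base:,}x L1)")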

To conclude, there are a lot of reasons things go "slow." I'm still surprised that on my 2 GB RAM, 2.2 GHz dual-core system, Firefox will sit for 3-5 seconds before deciding to just launch for me. Then I remember that my hard drive is sifting through 250 GB of data, grabbing what's relevant and loading it into memory while processing what it needs to in order to spit it out to me. Vista will include some marked improvements in prefetching and preloading frequently used applications that will take a lot of this wait down, and it'll hopefully be smart about it. (XP does this to some degree right now, as well.)

Keep in mind there are plenty of other reasons: memory leaks, MORE adware/spyware/virus problems, simply poor programming (bugs, errors, incompatibilities, poor beta testing, poor resolution of known issues, plain poorly written code that does things in 50 lines of code and 3 libraries that could be completed in 2 lines).

Everyone's just banking on the next generation of computers, and the one after that, being fast enough to make it all difficult to notice.

(More on Vista's development process. Oh, note that it's a self-link.)
posted by disillusioned at 7:06 AM on February 13, 2006


On an old XT PC the software might have taken up 20K of space. A single programmer can realistically "know" every part of that software. Today's programs take hundreds of megabytes; no single individual can understand all the details of a single piece of software, so various frameworks and code-generation engines are used. Standardized auto-generated code becomes very wordy, because it is designed to work with a million different variables. Hand-tuning and optimizing is no longer realistic or possible.
It's a bit like the space program: if you really wanted to get someone into space, you could just use a really big catapult, but if you want to get a group of people into space, carry a payload, perform science experiments, and then get them all back to Earth safely, you're going to need a hideously complex machine with thousands and thousands of people to support it, maintain it, and keep it working. You need redundant systems, and special parts that are never used except in times of emergency or in extreme circumstances. Software is the same.
posted by blue_beetle at 8:15 AM on February 13, 2006


But what is contributing to the growth in the size of the software?

Software is a gas. It expands to fill the available space. A metaphor, to be sure, but no less true for that.
posted by kindall at 8:17 AM on February 13, 2006


I wouldn't confuse the disk-space size of a program with how fast it will run. Back on my old 8088 IBM-compatible I had a few crude graphic art programs that took up more disk space than a screensaver called Razzle Dazzle, but even though they were larger, they didn't really crank the processor the way Razzle Dazzle strained it to generate trippy graphics.
posted by vanoakenfold at 8:19 AM on February 13, 2006


I hate the term "software bloat". Software is for the most part more complicated because it does a hell of a lot more. The people writing software 20 years ago managed it because they made a hell of a lot of compromises to make it even barely usable. Lots of caching, lots of hacks/shortcuts, lots of things you plain couldn't do (or wouldn't try unless you wanted to wait a few hours for the computer to catch up).

The thousand fold increase in processor speed has mainly bought you the ability to do anything you want, usually instantly.
posted by cillit bang at 8:22 AM on February 13, 2006


Now this is a computer that is nearly 1,000 times as fast as the first XTs, and it's slow?!? I guess I'm showing my age, but I'm shocked and offended to even consider that that could be so.
That may be so if you only consider the CPU core speed, but that is but one of many factors. Other rates -- like bus speeds, hard drive rotational speeds, hard drive actuator seek speeds -- have improved over the years but not nearly as fast as CPU speeds. Hard drives still have moving parts and still have mechanical constraints that dictate how fast they can rotate. And in many cases on a modern computer the CPU is mostly idle for the vast majority of tasks, simply sitting there waiting for all the other slower components to catch up.

So you can't just say "4.77 MHz -> 3.0 GHz" and conclude "why aren't things 644 times faster overall?"
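A worked example, Amdahl's-law style, of why the raw clock ratio doesn't carry over (the 50/50 split between computing and waiting is an assumption for illustration):

# If only the CPU-bound fraction of a task gets faster, the rest caps the overall gain.
cpu_fraction = 0.5              # share of the old task's time spent actually computing
io_fraction = 1 - cpu_fraction  # share spent waiting on disk, bus, memory, etc.
cpu_speedup = 630               # roughly 3.0 GHz / 4.77 MHz

overall = 1 / (io_fraction + cpu_fraction / cpu_speedup)
print(f"Overall speedup: {overall:.1f}x")  # about 2x, not 630x, if the rest stood still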
posted by Rhomboid at 8:38 AM on February 13, 2006


i agree with the code bloat and feature bloat things that people have pointed out, and certainly there are bottlenecks in virtual ram ... but another factor is the great increase in graphic quality, which is probably responsible for the much larger sizes of today's programs

eye candy is probably the one thing that slows things down more than anything else ... plain programs with bare bones design are often very snappy on today's machines

i also think that software design hasn't kept up with hardware design
posted by pyramid termite at 8:46 AM on February 13, 2006


To summarize: If you ran DOS 3.3 on this new PC, it would be hella-fast.
posted by Wild_Eep at 8:59 AM on February 13, 2006


I also wouldn't discount the increasingly complex needs of software due to the environment it's now run in, namely large-scale networks. Early consumer operating systems like MS-DOS, Windows, and Mac OS were written pretty close to the hardware. You'd run a program and it'd have free rein over the hardware and resources, and the OS would step out of the way. The amount of "direct to the metal" capability has decreased with each successive revision -- it's hard to multitask that way, even harder to have network resources going in the background or to have several applications doing accelerated 3D rendering at the same time. There's overhead to take care of.


I remember when watching a video clip would take all of my computer's resources. Eventually I could multi-task, and now I can even watch a DVD while coding, downloading, and doing half a dozen other tasks. The problem is that now I want to do that while watching an H.264-encoded movie -- better quality and lower file sizes for the resolution, but it takes a lot of processing power to decode in real time. Today, my computer is slow. But in a few years decoding 1080p video will be reasonably standard, that task will be trivial, and I'll be bitching about something else.
posted by mikeh at 9:07 AM on February 13, 2006


The wisest thing my father ever said (and he's said many a wise thing) was this:

"The speed you will ever think is acceptable for your own computer is as fast as the fastest one you've ever seen."

so, essentially what mendel said.
posted by phearlez at 9:23 AM on February 13, 2006


Neal Stephenson explains:

My thoughts are more in line with those of Jaron Lanier, who points out that while hardware might be getting faster all the time, software is shit (I am paraphrasing his argument).
posted by hoverboards don't work on water at 1:10 PM on February 13, 2006


One bit I'd emphasize that's already been alluded to, as to why modern XP installations end up slow even on fast hardware: arrogant software that regularly checks for updates to itself and pre-loads itself on start-up so it'll (theoretically) appear to be blindingly fast when or if you do launch it... in the meanwhile, it's consuming resources and slowing down everything else.
posted by Zed_Lopez at 2:10 PM on February 13, 2006


I am a programmer, and I think part of the problem is the culture in software development organizations. Basically, my boss is happy when I close bugs. The faster I fix bugs, and the more of them (a feature request is a bug), the happier he is. So it is better for me to do the minimum amount of work required to accomplish the task I've been assigned. Usually, my work is graded on the following criteria, in order: a) It can't break/crash/etc. b) It must fulfill the requirements stated in the bug. c) It must be documented. The speed and efficiency of the code is hardly a blip on the radar.

So why isn't speed an issue for my boss? Because it isn't an issue for his boss, the CEO. The CEO only cares about giving customers what they want, which is features. In reality, customers usually don't notice how fast a bit of code is, or how much RAM and disk space it takes. When they do notice, they don't think to themselves “Gee, this software is badly written and too slow”. No, they think “MY computer is too slow for this”, because usually they need to run this software (or think they need to), and it is possible to increase the speed of their computer, while it generally isn't possible to make the software better.

I could write better software, but another issue is that profiling the code and rewriting as needed is one of the last steps in writing something (at least it should be; premature optimization is bad). Usually by the time I'm at the point where I could do some profiling runs and see how to make it better, I'm being pressured to release the code to QA and move on to something else. I'd really like to take a day or two and make it as fast and efficient as I know how, but it's just not in my best interests to do so; it's better for me to move on to the next thing as soon as the code works.
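(When I do get that far, the profiling step itself is cheap. In Python, for instance, it's a few lines with the standard-library profiler; process_report below is just a stand-in for whatever function you suspect is slow.)

# Minimal profiling sketch with Python's standard-library cProfile.
import cProfile
import pstats

def process_report():
    # placeholder workload standing in for the real code path
    return sum(i * i for i in range(1_000_000))

profiler = cProfile.Profile()
profiler.enable()
process_report()
profiler.disable()

# Show the ten entries with the most cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)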

If I had to tell you how much more efficient my code could be, I'd say that when I have spent the time to optimize it, the result has been anywhere from 25% faster to 2x faster, usually somewhere in the 30% to 50% range without major rewriting. The problem is that this program (like most) will only use the CPU in very short spikes (wait for something to happen, do something, wait for something to happen...), so nobody is going to notice unless they are on a very old machine. Since everyone in my office has computers that are less than 2 years old, nobody I know who will run this program would be able to notice my efforts.
posted by darkness at 2:11 PM on February 13, 2006


Zed_Lopez > ...Arrogant software...

Well, arrogant companies that make developers write these features. However, in all fairness, IMHO, Windows itself should ask you whenever some software tries to make itself start on boot. Not just to keep down startup bloat, but to help prevent spyware from being installed.
posted by darkness at 2:24 PM on February 13, 2006


At home, a 266MMX is slow. At work, a 1GHz P3 is slow, or at least it is once I've got a hojillion programs running at once.

But I suspect, as others have said, that in the context of your example it means "slow given its price", or even "slow for its clock speed".

AMDs are fast for their clock speed. Pentium Ms are fast for their power consumption. Every so often a chip pops up that's either much slower or faster than it "should be" given the typical progression of CPU speed.

(But, depending on your application, CPU speed is probably the least of your worries. I find that too little RAM or a slow hard drive have much more impact on the general performance of a PC these days. And for gaming, obviously you sink your money into the video card(s).)
posted by krisjohn at 3:14 PM on February 13, 2006


I too am a programmer.
A lot of the time I have a choice between developing something quickly or making it run fast. Back when computers were slow, I concentrated on the latter. Processor time was expensive. Now, processor time is a lot cheaper than my time so I concentrate more on developing quickly.

Strangely, with the shift towards internet applications (lots of users on one machine), it's becoming more important for me to optimise code at the cost of quick development times.

So, developers optimise according to which resource costs the most money. As the cost of memory and processor speed has come down the cost of the programmer has become much more important. This is the main reason for "software bloat". Even though the feature set hasn't increased 10,000 fold, the price of developing advanced features (without caring about the speed) has dropped to such a level as to make them accessible to small users.
posted by seanyboy at 3:26 PM on February 13, 2006


I too have a 'thing' against Celerons and I would be loath to purchase one. This may be baseless now.

An anecdote: about 4 years ago we kids chipped in and bought 'Dad' a computer to replace his WebTV. I found a deal on a 500 MHz Celeron-based machine. Being the computer guy and the kid who sees them the most, I am regularly on this machine. I HATE THIS MACHINE! It is a dog! My 333 MHz Pentium II is faster. I am continually amazed at the poor performance of this machine.

Now it may not be fair. It could be a REALLY slow hard drive or something else, but I blame the fact that it's a Celeron.
posted by JamesMessick at 4:57 PM on February 13, 2006


Install your OS to your RAM. Install your applications to your RAM. Watch how fast everything moves.

That's an interesting point you make. I was having a very similar discussion with one of my friends who's a dev over at MS.

I remember back in my 386DX40 days, if I wanted to recompile the WWIV BBS source code, I would put the compiler and the code in a ramdrive (using RAMDRIVE.SYS, of course), and it would FLY. So I asked my friend, "What ever happened to ramdrives? How come nobody uses them any more? What would happen if I put Windows in a ramdrive?"

And he basically laughed at me. He said that a ramdrive doesn't offer a significant advantage over Windows' memory manager, and that I would be wasting my time. He did say, however, that he thought Windows' memory manager could possibly be improved so that it either didn't use virtual memory as much, or used it in a different way.
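As I understood his point, the memory manager already uses spare RAM as a disk cache behind the scenes, so a ramdrive mostly duplicates what's already happening. A rough way to see it (the file path is hypothetical; the effect depends on having free RAM):

# Read the same large file twice and time each pass. The first read has to hit the
# disk; the second is usually served from the OS page cache -- spare RAM the memory
# manager is already using as a disk cache, no ramdrive required.
import time

PATH = "bigfile.bin"  # hypothetical: any file of a few hundred MB

def timed_read(path):
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):  # read in 1 MB chunks until EOF
            pass
    return time.perf_counter() - start

print(f"First read (from disk):        {timed_read(PATH):.2f} s")
print(f"Second read (from page cache): {timed_read(PATH):.2f} s")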

However, I'm not completely satisfied with his explanation. How could a ramdrive possibly not be faster?
posted by Afroblanco at 5:46 PM on February 13, 2006


I can think of a number of reasons for modern machines being "slow" in relative terms.

A) "Modern" OS are largely "legacy support" efforts, for which success = support of "legacy" hardware and software. Even when core parts of an OS, such as disk access routines, are outdated crap [warning: link to MS Word .doc file], it is tough to replace/improve them without breaking widely deployed software applications, hardware drivers, and other bits users and developers have come to rely on.

B) Most PC architectures are far from "optimized" for performance, because hardware "standards" are developed by coalitions of manufacturers whose economic and strategic interests align only temporarily, with minimal guidance/herding from Microsoft/Intel. Hardware innovators whose interests aren't mainline don't do well in the long run, no matter how innovative their contributions. Bus architectures, I/O control, display, memory management, etc. are all low-level functions that could be (and, on higher-end systems like the IBM iSeries, are) offloaded to specialized sub-system hardware and firmware, to a much greater degree than they are in PCs. But the volume market for PCs is entirely cost driven, and there is little likelihood you'll see OS and application vendors plowing development time into high-end hardware support anytime soon. Even where seemingly high-end hardware capabilities are "supported" (SMP, SLI, RAID), the PC implementations of these ideas are often crippled (see the NUMA FAQ for a comparison with SMP, as one example) compared to workstations and mid-range systems, and require significant specialized programming to deliver much benefit.

C) Ramdrives are the kind of idea that make PC developers and software engineers snicker or cuss, depending on their age, and the accuracy of their carbon based memory. What I remember of the ramdrive genesis was this:

With the 80386, Intel finally introduced a chip that potentially opened the way to a 32-bit memory space, and which also introduced specialized hardware for managing large amounts of memory, in a time when memory was horribly expensive and no OS existed for the chip that could use the capability anyway. For a long time, anybody with a lot more money than sense could buy an 80386 and enough RAM to be directly useless to DOS, maybe even set it up as "expanded" vs. "extended" RAM (remember those kludges?), convert the excess to a "disk" via an OS "driver," and then access it using the dopey BIOS disk access routines. Sure, it was a hundred times faster than the doggy hard disks of the day, but it was one of the worst bags-on-the-side anyone ever came up with, in terms of pure performance. The spreadsheet jocks loved it, though, as it was an early way to functionally overcome the 640K memory "barrier," but even they dumped their ramdisks when the first extended memory managers like QEMM and MS EMM386.SYS came along and were soon being called by upgraded versions of Lotus 1-2-3 and other memory-hogging applications. That's because ramdisks generally called for the processor to execute at least 50 times the code for each access of a "byte" in ramdisk as compared to accessing a "byte" in memory, and so computationally, assuming you ever had an OS that was capable of addressing the memory directly (which was the whole problem with DOS in the first place, and what led to this insanity), using the memory to buffer disk reads and writes via page management was hugely more effective than treating the memory as if it were disk. By about 50X per access. Once DOS was dead, ramdisks should have been forgotten as the dead-end hacks they were, but...

Lately, with memory prices dropping faster than a Dutch whore's drawers, I've seen the ramdisk idea hauled out again in lots of intraWeb discussions. Essentially, it's the same situation of people thinking that they can make better use of excess memory than the OS can, and in specialized circumstances, they may even be right. But emulating disk to use excess memory is still a bad idea, computationally, especially on any hardware more recent than a 286 level chip, since everything since the 80386 has included a hardware MMU, to which only the OS is likely to have access, anyway.

D) Some companies, such as Apple, Sun, and SGI (and once upon a time, Amiga, DEC, and others lost to competitive history), have long tried to position the combination of hardware/OS/application as a product, arguing that the possibility for optimization between hardware and software functions is far greater in a platform that is developed comprehensively. I think there is little doubt that you always could, and probably always will be able to buy something holistically developed as a system, that is significantly faster/better than a PC for some set of computational tasks, but I also think that PC's will continue to become more capable general purpose devices. At some point, if a market develops for "non legacy" PC's, a huge amount of OS and application cruft can be jettisoned, that will free system resources and reduce complexity, contributing to improved PC performance.

But you know what? I've been hearing about the death of the floppy drive for 10 years now, and yet, according to this guy, "In fact, the single-largest selling computer accessory in 2004, worldwide, was the external floppy drive, even as top PC and laptop vendors began shipping without floppy drives." Which kinda gets me back to my point "A"...
posted by paulsc at 3:38 AM on February 14, 2006


paulsc - thanks for your answer to my ramdrive question! I've been wondering about that for quite a while. It really was a lot faster than my HDD at the time (which I believe was 340 MB), but, yeah, it sounds like a real memory manager would have run circles around it.
posted by Afroblanco at 11:28 AM on February 14, 2006


This thread is closed to new comments.