Why can you never have enough memory?
October 19, 2008 12:25 AM Subscribe
Why is there no such thing as too much memory?
It's a truism that you can never have enough RAM, but why? Assuming you have enough that your peak memory consumption in daily use is significantly less than the RAM you have installed, why would adding more improve performance? Is it simply that the memory management in even the most current OSes is lame enough that they page excessively without a huge buffer of empty real memory?
I'd think this was an embarrassingly easy question, but I've asked a couple of serious hardware geek acquaintances and the best they came up with was even less specific than my guess: "something to do with the memory management."
More to the point, what if you could put everything on your computer in RAM? Not just things that are running - everything. You'd never have to transfer programs or documents or anything else from your hard drive (which is slow) to your RAM (which is faster). In fact, there are machines that load ~everything into a RAM disk, usually for databases and the like.
posted by spaceman_spiff at 1:06 AM on October 19, 2008
Best answer: It's to do with preemptive caching. Basically, the operating system reads files into memory that it thinks you're going to access; when you do access them, it's orders of magnitude faster than if they had to come off the disk.
Given more memory, more files can be cached, so the likelihood that any given file access is served from the cache is greater, the CPU isn't left waiting on I/O, fewer cycles are wasted, and performance improves.
posted by claudius at 1:14 AM on October 19, 2008
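A minimal way to see the cache effect claudius describes: time the same large file read twice. The sketch below assumes Python and a placeholder filename; note the first read will also be fast if the file was touched recently (on Linux, as root, `sync; echo 3 > /proc/sys/vm/drop_caches` empties the page cache first).

```python
# Demonstrate the OS page cache: read the same file twice; the
# second (warm) read is served from RAM rather than the disk.
import time

PATH = "bigfile.bin"  # placeholder: any file of a few hundred MB

def timed_read(path):
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):  # read in 1 MiB chunks until EOF
            pass
    return time.perf_counter() - start

cold = timed_read(PATH)  # likely hits the disk (if not already cached)
warm = timed_read(PATH)  # served from the OS page cache in RAM
print(f"cold read: {cold:.2f}s, warm read: {warm:.2f}s")
```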
I would think 2GB would be enough for my uses, but I've found that Safari is one leaky program.
My machine's uptime is 7 days now, and Activity Monitor says I have 470MB free, largely thanks to Safari taking 400MB to display a single window -- this text-only ask mefi page!
Unbelievably bad, and when I have two or three misbehaving apps going, I can run out of physical RAM and the damn machine all but grinds to a halt.
The point of lots of RAM is to avoid paging when you run out of memory; I don't think aggressive memory pre-caching would be that noticeable.
posted by troy at 1:22 AM on October 19, 2008
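If you suspect a particular app is the culprit, as troy does with Safari, a sketch like the following lists the biggest memory consumers. It assumes the third-party psutil package (not in the standard library); Activity Monitor or top shows the same thing interactively.

```python
# List the five processes with the largest resident memory (RSS),
# to spot runaway apps like the Safari instance described above.
import psutil  # third-party: pip install psutil

procs = []
for p in psutil.process_iter(attrs=["name", "memory_info"]):
    mem = p.info["memory_info"]
    if mem is not None:  # some processes deny access; their attrs are None
        procs.append((mem.rss, p.info["name"] or "?"))

for rss, name in sorted(procs, key=lambda t: t[0], reverse=True)[:5]:
    print(f"{rss / (1 << 20):8.1f} MiB  {name}")
```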
Modern operating systems will use all RAM - not just what's needed for currently-running programs. They use the rest mostly as a hard disk cache. So even if you never fill RAM with programs, all of it is being used to improve performance. At least, that's true for Unix-like OSes; I assume Windows has implemented this by now.
Is it simply that the memory management even in the most current OSes is lame enough that they page excessively without a huge buffer of empty real memory?
Memory management is pretty sophisticated these days. If you're seeing unexpected disk activity, you might want to look into other causes like a virus infection, or utilities like search indexers or virus scanners.
I'd think this was an embarrassingly easy question, but I've asked a couple of serious hardware geek acquaintances and the best they came up with was even less specific than my guess: "something to do with the memory management."
Some hardware geeks are into improving game performance, which means obsessing over the latest video cards and CPUs, and not being too interested in more mundane things. If you're primarily interested in pushing one app at a time to maximum performance, memory management may not matter to you.
posted by yath at 1:31 AM on October 19, 2008
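On a Unix-like system you can watch this in action: much of what a naive reading calls "free" memory is actually holding cached file data. A small Linux-specific sketch, parsing /proc/meminfo:

```python
# Show how much nominally "free" RAM is in use as page cache.
# /proc/meminfo reports values in kB (Linux only).
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":")
        meminfo[key] = int(value.split()[0])  # strip the "kB" suffix

for key in ("MemTotal", "MemFree", "Cached", "Buffers"):
    print(f"{key:9s} {meminfo[key] / 1024:8.0f} MiB")
```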
spaceman_spiff: " Your machine can only address so much of it - used to be something like 4gb, now it's ... more."
More specifically, only 64-bit systems can handle 4GB or more.
posted by IndigoRain at 1:32 AM on October 19, 2008 [1 favorite]
Best answer: 1/ Pretty much all modern software is written with the assumption of more memory being available than the previous generation of software assumed. Once upon a time there was a joke that the emacs text editor stood for "Eight Megs And Constantly Swapping", by way of reference to its allegedly unreasonable requirements. These days, you're looking at hundreds of megs to launch word processors or web browsers or whatever. Even if you have "enough" today, it won't be "enough" in a year or two.
See also.
2/ Your data grows, too. Once upon a time desktop publishing in black and white was considered novel and exciting, and a digital camera with a 1 megapixel sensor was a high-end pro tool. My camera phone has a 3.2 MP sensor, my DSLR is 8 MP, and I know people who make movies and do non-linear video editing on their desktop, chucking around hundreds of gigs of data quite happily. (Rough numbers on how pixels turn into bytes are sketched after this answer.)
3/ The more memory you have, the more you can multi-task. I've got 2 GB of RAM in part because I want to run a couple of instances of my MMORPG client while I write this answer and manage my photo library. When I had a gig of memory earlier this year I would be swapping just from a web browser and the MMO client.
4/ Finally, any good, modern operating system can use otherwise unoccupied RAM to cache things that would normally live on the filesystem, which will generally make stuff run faster.
posted by rodgerd at 1:38 AM on October 19, 2008
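To put rough numbers on rodgerd's point 2, here's a back-of-envelope sketch. It assumes 3 bytes per pixel for uncompressed 8-bit RGB; editors that keep layers, undo history, or 16-bit channels need considerably more.

```python
# Rough arithmetic: pixel counts turn into bytes quickly once an
# image is decompressed into memory for editing.
def uncompressed_mib(megapixels, bytes_per_pixel=3):  # 3 = 8-bit RGB
    return megapixels * 1e6 * bytes_per_pixel / (1 << 20)

for mp in (1, 3.2, 8):
    print(f"{mp:4} MP -> {uncompressed_mib(mp):5.1f} MiB per open image")
```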
Well, you've got two different statements there. You seem to be saying that "you can never have enough memory" is the same as "you can never have too much memory", which isn't the case. (I suspect you're having trouble getting answers because the geeks you're talking to think you're asking a hardware question, when in fact I think what you're really asking is an aphorism question.)
So, when they say you can't ever have "too much" memory, what they mean is that there is no real downside to adding memory. The extra memory will just sit quietly, not getting used, and not causing any problems by its mere existence. Adding more memory won't necessarily improve performance but because it won't degrade performance you can pretty much add as much as you want. Therefore, there's (theoretically) no upper limit (meaning there's no point at which you have too much memory, even though you may be well past the point of having enough memory).
posted by stefanie at 1:40 AM on October 19, 2008 [2 favorites]
To put it simply, you can't have too much RAM because RAM is faster than your hard drive; paging to disk can slow the computer down tremendously, and ever-increasing amounts of RAM stave that off. Until hard drives (or an equivalent) are as fast as RAM, or out of the picture completely, RAM will be desirable.
Essentially, hard drives are so much bigger that we haven't come close to having enough RAM for the average user to forgo the hard drive completely (and of course there are other obstacles as well, given RAM's volatile nature). But RAM is what allows the computer to get past the slow speeds of your hard drive (among other things, of course). It's much less to do with any programming aspect, and more to do with the physical limitations of current technology.
The technology appears to be moving slowly towards a one-stop shop: non-volatile RAM that retains its speed, which would merge two traditionally separate forms of storage. Of course there are significant barriers to this at present, and a number of intermediary formats are likely. But until page files are no longer slower than RAM, people will keep throwing more RAM into their systems to avoid them.
posted by Phyltre at 1:52 AM on October 19, 2008
I'd say that the truism is wrong. Effective and cost-efficient performance tuning must happen at the bottleneck.
That follows from basic queuing theory. Most PC tweakers are not really into queuing theory or serious performance analysis; they apply a collection of rules of thumb and past experience, which is why they come up with sayings like that.
posted by dhoe at 2:22 AM on October 19, 2008
RAM is cheap and getting cheaper. RAM is fast. If you are going to spend $50 more on a computer, spending it on RAM will likely have the most noticeable effect.
However, most computers can only use about 3GB, and even among new ones it is rare to be able to use more than 16GB (an artificial limitation, but a limitation nonetheless). So, from a purely physical standpoint, you can have more RAM than your computer will use, which would be, well, useless.
I think the origin of the phrase is that you will see performance gains right up to the maximum amount your system can support. The reasons for this are mentioned above: predictive caching, minimizing disk writes, etc.
posted by Nothing at 3:01 AM on October 19, 2008
The technical idea behind "more RAM is good because RAM is faster than your hard drive" is the Memory Hierarchy.
If you're running 64-bit linux with 64-bit apps, there's no barrier to using as much memory as your motherboard supports, with the next speedbump coming at 256 TB RAM or so. If you're using 32-bit linux, there's a per-application limit that is typically 3GB, and supporting more than 3GB total may require installation of a special "server" kernel which gets you support for 64GB or the motherboard limit, whichever is less.
Other OSes like OS X and Vista probably have different limitations, but if they haven't also shattered the 4GB limit in the 13 years since the Pentium Pro was released, shame on them. (It looks like Microsoft uses total addressable RAM as a bullet point, with different arbitrary maximums for different products.)
posted by jepler at 6:10 AM on October 19, 2008
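The limits jepler mentions all fall out of powers of two: 32 address bits reach 4 GiB, the 64GB "server kernel" figure corresponds to 36-bit PAE physical addressing, the 256 TB speedbump is the 48-bit virtual address space current x86-64 chips implement, and a full 64-bit address reaches 16 EiB. A small sketch of the arithmetic:

```python
# Address-space limits mentioned in this thread, as powers of two.
limits = {
    "32-bit flat addressing": 2**32,   # 4 GiB
    "32-bit x86 with PAE": 2**36,      # 64 GiB physical
    "x86-64 virtual (48-bit)": 2**48,  # 256 TiB
    "full 64-bit address": 2**64,      # 16 EiB
}
units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"]
for name, size in limits.items():
    n, i = float(size), 0
    while n >= 1024 and i < len(units) - 1:  # scale to a readable unit
        n, i = n / 1024, i + 1
    print(f"{name:26s} {n:6.0f} {units[i]}")
```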
More specifically, only 64-bit systems can handle 4GB or more.
Many 32-bit systems can handle more than that with PAE-enabled kernels.
posted by cmonkey at 6:21 AM on October 19, 2008
I liken RAM to counter space in a kitchen: it lets the computer keep instructions available in memory without having to get them out and put them back. Given that computers have limited RAM slots, and that memory sticks for a given computer come in limited sizes, you can't reasonably put too much RAM in your computer. One could build a computer with more slots and put in RAM that would be truly superfluous. People often skimp on RAM and would see better performance if they added memory; hence the saying.
posted by theora55 at 7:57 AM on October 19, 2008
From the perspective of server software, there's definitely a concept of "enough RAM". Most server apps have a maximum working set: the total amount of memory required to do the job. E.g., Metafilter's database may fit in 2 gigs of RAM. So if MeFi's database server has 4 gigs of RAM, it can hold the entire database in memory and have plenty of room left over for random system tasks. The entire working set fits in RAM, and more RAM won't make it faster.
There are software costs to having more RAM than "enough", so on a high-end server it can pay not to put in any more. The main cost comes at the 2/3/4 gig barrier, since addressing more requires either paging tricks on a 32-bit OS (with concomitant overhead in accessing that memory) or a 64-bit OS (with 2x the overhead for every single memory pointer). These costs are relatively minor, I'd say a 1% to 5% performance hit, but they're not zero.
There are also hardware costs to a motherboard being able to address more RAM than "enough": the more bits are addressable, the more the memory controller and CPU caches have to keep track of. Again, it's a minor effect, but it's not zero.
But that's all pretty theoretical. In the real world you never quite know what your working set is, and if you're building a bunch of servers you're probably more driven by the cost of the RAM chips than by a few percent of overhead from having too much RAM.
posted by Nelson at 8:21 AM on October 19, 2008
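Nelson's working-set point can be illustrated with a toy LRU cache simulation: hit rate climbs with cache size only until the cache covers the working set, then flattens. A sketch, assuming uniform random access over a 1,000-page working set (real workloads are usually more skewed, so returns diminish even sooner):

```python
# Toy LRU simulation: once the cache (think RAM) is as large as the
# working set, extra capacity stops improving the hit rate.
import random
from collections import OrderedDict

def hit_rate(cache_size, working_set, accesses=50_000, seed=1):
    rng = random.Random(seed)
    cache, hits = OrderedDict(), 0
    for _ in range(accesses):
        page = rng.randrange(working_set)  # uniform access pattern
        if page in cache:
            hits += 1
            cache.move_to_end(page)        # mark as most recently used
        else:
            cache[page] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / accesses

for size in (250, 500, 1000, 2000, 4000):
    print(f"cache {size:5d} pages: {hit_rate(size, working_set=1000):.1%} hits")
```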
Counterpoint: on a laptop, or a virtual machine that's going to be hibernating a lot, less memory (specifically, just enough for the task at hand) is better and gives a much faster sleep/restore, since hibernation has to write the contents of RAM out to disk.
Which goes to show, it all depends on what you're doing ...
posted by devbrain at 9:54 AM on October 19, 2008
"Your machine can only address so much of it - used to be something like 4gb, now it's ... more."
Up to 16 exabytes, in theory.
posted by chrisamiller at 10:14 AM on October 19, 2008
Best answer: I also don't think that truism is correct, but:
1. Your peak use this month is not your peak use next month. Oh, QuickTime, iTunes, and Firefox have new versions out; they all use more RAM than before. Or your new job makes you run a huge suite of apps. Who knows. These things generally are not static for typical users.
2. Your OS will swap less onto disk if it can keep more in real memory, thus avoiding slowdowns.
3. Some OSes and applications do pre-emptive caching.
4. Some applications will make use of RAM only if it's there. Server apps and games do this. For instance, Battlefield 2 will use all available RAM, putting more things into RAM and thus avoiding fetching things from disk and causing performance issues.
5. Planning for the future. Sure, 2 gigs is great today, but if you want to put Windows 7 on this machine in a couple of years then it's a good idea to max it out to 4 gigs.
6. More RAM is a kind of insurance against badly written programs with memory leaks. The leaks aren't as obvious and the computer needs rebooting less often. (A tiny illustration of the pattern follows this answer.)
7. RAM is cheap. Considering all the advantages of more RAM, it's usually worth buying even if you don't plan to use it right now.
posted by damn dirty ape at 11:58 AM on October 19, 2008
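A tiny sketch of the leak pattern behind point 6, with made-up names: a long-lived "cache" that is never evicted grows without bound, and extra RAM only postpones the point where it forces paging.

```python
# Hypothetical leak: a memoized lookup whose results are never evicted.
_results = {}  # module-level dict that lives for the life of the process

def expensive_lookup(key):
    if key not in _results:
        _results[key] = b"x" * 100_000  # stand-in for a big computed result
    return _results[key]

# If keys never repeat (request IDs, timestamps...), the cache is a leak:
for request_id in range(1_000):
    expensive_lookup(request_id)

print(f"{len(_results)} entries, ~{len(_results) * 100_000 / (1 << 20):.0f} MiB held forever")
```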
Response by poster: Thanks, everyone. To address the simple language issue raised by stefanie, I do think that "you can never have too much RAM" is intended to mean "you can never have so much RAM that it stops being of value," which is close enough to "you can never have enough." So I do think the expressions are more or less equivalent.
A lot of these answers seem to misconstrue my question--they explain why RAM is useful in general (because it's faster than the hard disk), which is part of basic technical literacy. My question was a little subtler. One of its premises was that the entire virtual memory space claimed by the applications was less than the physical RAM, so that you're not paging application memory to the disk at all (or shouldn't be).
I know that OSes use spare RAM for file access caching, but like troy, I took that to be a negligible performance factor, especially since you'd inevitably get diminishing returns: presumably the OS fills memory with hard disk data in the order of its perceived likelihood that you're going to use it, and by the 500th meg the guesses would probably be pretty lame.
Still, those who suggested caching will get best answers, because it seems to be the answer, however skeptical I may be about the significance of the effect.
Thanks again, you guys were awesome as always.
posted by abcde at 2:11 PM on October 19, 2008
This thread is closed to new comments.