Is it possible to designate RAM for only "write" purposes?
January 20, 2011 7:13 PM

If one sets up a RAM stick as temporary storage to reduce writes on an SSD (via http://goo.gl/7cem) or for any other reason, can that particular RAM stick be utilized only in this manner?

I've currently got 3 sticks of RAM: 2 identical 2 GB sticks and a different 1 GB stick. Due to recent adventures in overclocking, I ended up removing the 1 GB stick, since having it in resulted in worse benchmarks. Now I want to know if I can stick it back in but have it only ever identified by my computer as an (albeit small) hard drive. That way it won't be read as "RAM" by my PC and lower my benchmarks, but it can still be used to speed up applications such as Firefox by storing their caches on it. Let me know if this question needs clarification.
posted by robobrent to Computers & Internet (10 answers total) 1 user marked this as a favorite
 
Best answer: No.

You might be able to convince an OS to do this (though: good luck!), but the slowdowns are coming from the motherboard, and you can't fool the motherboard - it has to treat the stick as RAM, not as a hard disk. Having mismatched sizes or an odd number of DIMMs forces it to do extra work to talk to any of them. Read your motherboard manual's section on optimal memory configuration - you might be able to add an additional 1 GB stick and get full performance back.
posted by aubilenon at 7:21 PM on January 20, 2011


Best answer: I'm not a Windows guy, but I am a programmer.

The ramdisk you create is not really bound to any particular stick of RAM. In fact, the ramdisk will be located at whatever addresses in memory the system can find to host it. That *may* be all on one stick, but could easily span two of them.

On a regular x86(-64) processor, all of the RAM you connect to the computer is treated as one giant pool. By setting up a ramdisk, you're merely telling Windows that some of that RAM should be treated as a disk. You likely have little or no control over where it actually goes--nor, I assure you, do you *want* to have such control.

And it's almost certainly going to show up as RAM in your various benchmarking stuff. It's RAM first and foremost, and is treated as a drive only because of the ramdisk driver you install.

Basically, all of this is true because a ramdisk is what we call "a hack". This isn't some sort of magical technology that turns RAM into a disk. Rather, this is a clever driver that loads into the system, grabs up a huge pool of regular old RAM, and then lies to Windows by saying, "Uh, so, this is totally a disk drive, man. Trust me."
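To make that concrete, here's a tiny sketch (plain C, purely illustrative -- a real Windows ramdisk is a kernel-mode driver, not a userspace program) of what the trick boils down to: grab an ordinary buffer of RAM, then answer "read sector" / "write sector" requests out of it. Note that nothing here says, or could say, which physical DIMM the buffer lives on.

    /* Toy illustration of the ramdisk idea: ordinary RAM pretending to be a
       block device. A real driver registers callbacks like these with the
       OS storage stack; this sketch only shows the core trick. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define SECTOR_SIZE 512

    struct ramdisk {
        unsigned char *pool;   /* plain heap memory; the OS places it wherever it likes */
        size_t sectors;
    };

    static struct ramdisk *ramdisk_create(size_t sectors)
    {
        struct ramdisk *rd = malloc(sizeof *rd);
        if (!rd) return NULL;
        rd->pool = calloc(sectors, SECTOR_SIZE);
        rd->sectors = rd->pool ? sectors : 0;
        return rd;
    }

    /* "Disk" reads and writes are just memcpy in and out of that pool. */
    static int ramdisk_read(struct ramdisk *rd, size_t sector, void *buf)
    {
        if (sector >= rd->sectors) return -1;
        memcpy(buf, rd->pool + sector * SECTOR_SIZE, SECTOR_SIZE);
        return 0;
    }

    static int ramdisk_write(struct ramdisk *rd, size_t sector, const void *buf)
    {
        if (sector >= rd->sectors) return -1;
        memcpy(rd->pool + sector * SECTOR_SIZE, buf, SECTOR_SIZE);
        return 0;
    }

    int main(void)
    {
        struct ramdisk *rd = ramdisk_create(1024);          /* a ~512 KB "disk" */
        if (!rd || rd->sectors == 0) return 1;

        unsigned char sector[SECTOR_SIZE] = "totally a disk drive, man";
        ramdisk_write(rd, 0, sector);
        ramdisk_read(rd, 0, sector);
        printf("sector 0: %s\n", sector);

        free(rd->pool);
        free(rd);
        return 0;
    }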
posted by Netzapper at 7:23 PM on January 20, 2011 [4 favorites]


Not without exotic hardware.
posted by Chuckles at 7:28 PM on January 20, 2011


You can't do this, and moreover, with wear-leveled SSDs you really shouldn't need to do it. You will get tired of the machine and replace it long before the SSD wears out.
posted by axiom at 8:01 PM on January 20, 2011 [1 favorite]


Best answer: What you're describing is called, in fancy tech speak, hierarchical storage. The idea is that really fast storage is expensive, so you don't have as much of it, but slow storage is cheap so you have a lot more. So the more recently/frequently something is accessed the more likely it is to be cached in faster storage. Every modern computer uses this system, and it basically looks like:

- instruction decode/trace caches
- L1 cache
- L2 cache
- L3 cache (on some models)
- RAM
- Hard disk

The same data can be copied into one or all of the layers. For example, a block that's been recently read might be in L2, L3, RAM, and on the hard disk. You can get even fancier, because most hard disks now have an internal read and write cache. And SCSI controllers maintain their own cache on the card! I'm working with a GPFS system right now that has another layer below the hard disk, where data lives on tape and is archived/fetched on demand.
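If it helps, the access pattern every one of those layers uses is the same, and it fits in a few lines. This is just a toy sketch in C (a tiny made-up "fast cache in front of a slow store", not any real OS structure): look in the small fast layer first, and on a miss, fetch from the big slow layer and keep a copy for next time.

    /* Minimal hierarchical-storage idea: a small fast cache in front of a
       large slow store. Hypothetical toy; real caches also track dirty data,
       eviction policy, associativity, and so on. */
    #include <stdio.h>
    #include <string.h>

    #define SLOW_BLOCKS 1024
    #define FAST_SLOTS  8            /* the fast layer is expensive, so it's small */
    #define BLOCK_SIZE  64

    static char slow_store[SLOW_BLOCKS][BLOCK_SIZE];   /* stands in for the disk */

    struct slot { int block; char data[BLOCK_SIZE]; }; /* block == -1 means empty */
    static struct slot fast_cache[FAST_SLOTS];

    static const char *read_block(int block)
    {
        struct slot *s = &fast_cache[block % FAST_SLOTS];  /* direct-mapped: one fixed slot */
        if (s->block == block)
            return s->data;                                /* hit: served from the fast layer */
        memcpy(s->data, slow_store[block], BLOCK_SIZE);    /* miss: fetch from the slow layer */
        s->block = block;                                  /* ...and keep a copy for next time */
        return s->data;
    }

    int main(void)
    {
        for (int i = 0; i < FAST_SLOTS; i++)
            fast_cache[i].block = -1;

        strcpy(slow_store[42], "recently used data");
        printf("first read : %s (miss, pulled from the slow store)\n", read_block(42));
        printf("second read: %s (hit, served from the fast cache)\n", read_block(42));
        return 0;
    }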

My point: your computer is already using RAM as a cache for hard disk data! The details depend on the operating system and filesystem, but in general, when you write a file to a hard drive it isn't immediately sent to disk. It sits in RAM for a while until the OS gets around to writing it out. This is why it is so very important to shut down your computer properly and not just unplug it: important OS files might only exist in RAM and not on disk yet.

Free memory in a modern operating system is never really idle. The OS will use as much of it as possible to cache files and speed up disk access. What you want to do -- reduce writes to the SSD -- is already being done as much as possible. On Linux you could tune it to be more aggressive about holding data in RAM longer, sacrificing integrity in the case of a failure, but I don't think Windows has any knobs for this.
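You can actually watch that write-back behaviour from an ordinary program. In the POSIX sketch below (illustrative; the Windows counterpart of fsync() is FlushFileBuffers()), write() returning success only means the data is sitting in the page cache in RAM; it's only guaranteed to be on the disk or SSD once fsync() returns. The Linux knobs I mean are the vm.dirty_* sysctls (vm.dirty_ratio, vm.dirty_expire_centisecs, and friends), which control how much dirty data may sit in RAM and for how long.

    /* Demonstrates that a "completed" write may still live only in RAM. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        const char msg[] = "this may sit in the page cache for a while\n";
        if (write(fd, msg, strlen(msg)) < 0) { perror("write"); return 1; }
        /* Here the data is typically only in RAM (the page cache). Pull the
           plug now and it can be lost, even though write() reported success. */

        if (fsync(fd) < 0) { perror("fsync"); return 1; }
        /* Only after fsync() returns is the data guaranteed to be on the device. */

        close(fd);
        return 0;
    }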
posted by sbutler at 8:02 PM on January 20, 2011 [2 favorites]


Another vote for "no," with a slightly different explanation.

The short version is, the RAM that you plug in to a standard mainboard is all recognized and initialized by the BIOS before Windows ever loads. So all Windows ever sees is "You have 5 GB. Go nuts."

Now, the DIMMs you plug in might also be individually identifiable by the PCI bus capturing the SPD info and throwing it in a register somewhere, so that you can tell from some config window somewhere how many you have plugged in, etc. But there is no user-accessible system I am aware of for correlating memory addresses with physical memory chips, let alone telling Windows to only use certain addresses (and thus certain chips / DIMMs) for a certain purpose.
posted by rkent at 9:46 PM on January 20, 2011


Actually the answer should be yes.

Basically, you're not using any particular DIMM for your ramdisk, just "RAM" in general. But when you actually start running low on memory, your OS will start taking the unused parts of memory and writing them to your hard drive - a process called paging, because it works in 4 KB chunks called pages.

As far as the programs running are concerned, this paged out memory looks just like any other memory.

So let's say you have 8 GB of RAM and you allocate 4 GB for a ramdisk. If you run a program that requires 5 GB of memory, the OS should page out some of the contents of the ramdisk, or whatever else it feels is least likely to be needed later.

So, unless the ramdisk program sets some kind of 'don't page me bro' tag on the memory, it should get treated just like any other block of memory and paged in and out as needed.

If your 5 GB app needs data on the ramdisk, though, it could slow things down.
posted by delmoi at 11:01 PM on January 20, 2011


Er, wait. I think I read your question wrong. The answer to the specific question is no, you can't separate out the DIMM from the rest of the system.
posted by delmoi at 11:03 PM on January 20, 2011


Best answer: I've currently got 3 sticks of RAM: 2 identical 2 GB sticks and a different 1 GB stick.

I'm assuming this 3rd "different" stick has a lower clock speed, which is why it is lowering your benchmarks. So the answer is a little bit of "yes" and "no" (gotta love all the different answers!)

Yes, you can allocate a program or driver to a specific reserved spot in memory, which could reside entirely on that one stick. I've never had to do this, but I know it could be possible through C, ASM, or maybe even a debug wrapper that reduces the scope of memory the driver can access to a pre-determined block (basically tricking it into thinking that's all there is). This would not be without some challenges, as it would require fairly deep knowledge of how the kernel works in relation to the data buses. (Short answer: I'm not aware of anyone who has done this and made it public.)

As far as that having the desired effect you're looking for, the answer is no. Before you even get to your operating system, your BIOS has already dropped the clock speed of your 2 faster DIMMs to match that of the slowest. So the result will still be lower benchmarks. What you'll probably need to do instead is purchase a DIMM identical to the original 2 and then set up your caching. The plus side is you won't have to worry about which stick the cache ends up on from that point.
posted by samsara at 5:25 AM on January 21, 2011


So, unless the ramdisk program sets some kind of 'don't page me bro' tag on the memory

Every ramdisk driver I've ever seen for Windows uses memory from the non-paged pool to create the volume. The reason, IIRC, had something to do with the filesystem layer at that level not being re-entrant, i.e. it cannot service a page fault that occurs during a low-level block read, because that would require initiating a different low-level block read. So the memory you use to create a ramdisk is pinned and cannot be swapped out until you unmount the volume and free the memory.
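For the curious, the closest userspace analogue of that pinned, non-paged allocation is mlock() on Unix (VirtualLock() on Windows). The sketch below is just an illustration of the "don't page me" guarantee, not what a kernel-mode ramdisk driver literally calls.

    /* Userspace illustration of pinned memory: mlock() tells the OS this
       range may not be paged out. A kernel ramdisk driver gets the same
       effect by allocating from the non-paged pool. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 1 << 20;              /* 1 MiB */
        void *buf = malloc(len);
        if (!buf) return 1;
        memset(buf, 0, len);               /* touch it so the pages actually exist */

        if (mlock(buf, len) != 0) {        /* may fail if RLIMIT_MEMLOCK is too low */
            perror("mlock");
            free(buf);
            return 1;
        }
        printf("1 MiB locked in RAM; it can't be swapped out until munlock()/exit\n");

        munlock(buf, len);
        free(buf);
        return 0;
    }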
posted by Rhomboid at 9:56 PM on January 21, 2011

