Need suggestions for a Xen-compatible rackmount server.
January 28, 2008 1:16 PM

We need to buy two rackmount servers that will play nice with Xen (and Linux). Budget: approx $10k. Can you suggest a configuration, brand, and/or vendor?

I've been asked to price out a pair of servers for my company but I'm way out of the loop with contemporary hardware trends, especially server-class hardware. I'm not particularly interested in spending man hours to save a few bucks, and we definitely DO NOT WANT to hand-build or buy a whitebox -- we need a vendor who offers knowledgeable tech support on their hardware. FWIW, we don't have a dedicated sysadmin right now (which is why I'm asking here), but probably will within the next 3-6 months.

Parameters:

1. Should be designed to have Linux installed (preferred flavors are CentOS and Debian). Should support Xen. Unless there's a good argument that it's pointless, should have Intel processors that support VT.

2. Obvious: Hardware RAID. Must be rack-mountable. Space/power not a huge concern for now, but prefer under 2U.

3. We'll be buying two; one will be used to provide web/mail/etc services and will probably run a Xen hypervisor with all services being installed on Xen slices. The second one will be a dedicated file/database (likely MySQL) server. We're expecting to spend a little more on the database server (which will have additional disk space).

4. Pretty sure we want to go with a major manufacturer (Dell, HP, etc.) because I assume this will make management simpler as we add more machines and mean replacement parts will generally be more available than with a white-box or home-built machine. Ideally, next time we buy a server (which would probably be before the end of the year), we'd just order another couple of whatever we went with this time.

5. We are looking at either Zimbra or Scalix. We'd probably want to start with it on a Xen slice then migrate it to a full machine as our number of seats grows -- right now we're around 6 seats but may be scaling up to 50 before the year is up. I'm also interested in other possible mail options that 1) have Outlook support, 2) allow calendar sharing and 3) have mobile support. We'd prefer to run OSS, but are not entirely opposed to closed source if it'll meet our needs better.
posted by fishfucker to Computers & Internet (30 answers total) 3 users marked this as a favorite
 
Response by poster: Executive summary:

We need two servers that are good choices for Xen virtualization.
Primary purpose: 1) web/internet services, 2) database server.
We can spend up to $10k.
We DO NOT WANT to hand-build or go white-box.

I'm looking for suggestions like "Buy a XY2000 from Dell because we use them at my work to do A,B,C", or "You'll probably want to avoid option XYZ because virtualization doesn't work well with it", or "only a complete moron would try to run a virtualized mail server because X, Y, and Z".
posted by fishfucker at 1:20 PM on January 28, 2008


Best answer: Buy HP DL360s. Pick a model that is listed as a 'Smart Buy' on the HP website, because they come cheaper, and make sure you get ones that have dual power supplies. Buy an 'iLO advanced' license for them, so you can do remote installs and other such fun things.

I've personally overseen the purchase of over $300k of that exact model, filled racks with them. Good, reliable, nice boxes.

Only possible downside: they don't have tons of internal storage (they can hold 6 little SAS drives), but if you need tons of storage, you probably want a SAN or a NAS anyway.
posted by Tacos Are Pretty Great at 2:05 PM on January 28, 2008 [1 favorite]


Link.

Just add RAM.
posted by Tacos Are Pretty Great at 2:10 PM on January 28, 2008


Database servers don't virtualize especially well. You can do it, but you don't really get the benefits that you do from virtualizing application servers.

I haven't worked with Xen, but I have worked quite a bit with VMware. I'd recommend that you check out their hardware sizing guides, etc, because I suspect quite a bit of it would be applicable here too. VMware ESX is a customized Linux OS.
posted by me & my monkey at 2:17 PM on January 28, 2008


As for your mail options, have you considered Google Apps For Your Domain? You can get Premier Edition for $50 per seat, and it gives you the options you mentioned along with the ability not to have to manage the infrastructure. The Postini module included with it is nice. There's even a free academic version, if you qualify for it.
posted by me & my monkey at 2:19 PM on January 28, 2008


If you are running Zimbra, don't bother with virtualization. The installer sets up everything in one go, and breaking up the parts after that would probably be a hassle. Not that I have much Zimbra experience, though.

Bought a couple of DL360 servers a few months back, and compared to the Dell 1435 servers I bought before that, I'd say:
* I prefer the HPs' 4-6 2.5-inch bays to the Dells' two 3.5-inch bays
* The HPs gave a better subjective impression of quality
* The Dells do not play nicely with Debian (at least for sucky sysadmin me)
* Ah, what the hell, I can't be bothered listing arguments anymore. The HPs were just nicer and I'll never buy Dell again. And what Tacos said
posted by uandt at 2:26 PM on January 28, 2008


Response by poster: Thanks, guys, these are exactly the type of answers I'm looking for! I will take a closer look at the DL360.

To answer the question about Google Apps: we haven't considered it, but I'll read through what they've got -- email-wise, we need Outlook integration (for calendar, contacts and mail) and the ability to host in-house. It doesn't seem like either of these is a possibility for Google Apps.
posted by fishfucker at 2:54 PM on January 28, 2008


Fourthing the DL360. They are very nice boxes, and the iLO options are very nice to have. We use them exclusively here at work and they are very sturdy, easy to set up, have excellent support, etc.
posted by yellowbkpk at 3:23 PM on January 28, 2008


I'm going to second Tacos Are Pretty Great's suggestion. We use similar HP units in our RHEL deployments at my job, though we are moving to blades for a lot of stuff. If this is a project that will expand in the future, I'd recommend checking out HP's blade systems as well.
posted by jrishel at 3:31 PM on January 28, 2008


Response by poster: also, forgive my ignorance, but what's the general idea with HDs here? is there an advantage/need to buying HP-branded HDs, or can we just shove third-party paired/matching SAS drives in there? Can we mix-and-match sizes with RAID mirroring if we have at least a pair of each? (ie, can we have two mirrored 100GB drives and two mirrored 50GB drives?).
posted by fishfucker at 3:39 PM on January 28, 2008


Although you know about it, watch the VT thing and triple check it.

We set up a virtualization box recently and got burned when the sales guy made some assumptions about VT support.

We were using VMware, not Xen, but it was still a huge headache: we spent two days slogging through weird host OS issues (RAID drivers, etc.) only to find that there was no way to use VMware on the box at all and that we needed to start over.
posted by mrbugsentry at 3:41 PM on January 28, 2008


I wouldn't mix and match your drive sizes if you can help it; you'll have more flexibility that way, say if you want to go RAID 6 or something. Otherwise, you'll have to create separate virtual disks for each pair. There's no real advantage to going with HP-branded drives except for the sake of having the warranty through them; they'll probably just be rebranded Seagates anyhow.
posted by signalnine at 4:08 PM on January 28, 2008


Add another vote to the HP column.
posted by Silvertree at 4:23 PM on January 28, 2008


At work, we've been buying servers from ASA Computers and I've got nothing but good things to say about them. We usually use the website as a starting point and then email them for a custom configuration.

We've especially been happy that they do a 36-hour burn-in running our OS of choice (Ubuntu Gutsy); most other whitebox manufacturers either do no burn-in at all or else only use Windows.
posted by jacobian at 4:25 PM on January 28, 2008


Depending on the controller, you may be locked into HP hard drives -- I bought an ML350, and unless you want to unload real $$ for SAS, you can't get adequate performance out of SATA due to hardware 'limitations' with the cheaper controllers.

We use ASLab rackmount boxes. Friendly support, cheap boxes, work with any hardware, but they are not HP or Dell. To us, that is a benefit. Link is aslab.com or aslabs.com.
posted by Geckwoistmeinauto at 5:06 PM on January 28, 2008


We just bought 4 IBM eServers second hand from eBay. Pentium 3s, but we were after fast IO, not processor. 200 bucks each. They are good.
posted by mattoxic at 5:47 PM on January 28, 2008


The Dell 2950 or 6950 are probably the most commonly used Dell boxes out there, on par with the DL360. I cannot comment on them directly, but I work with a lot of customers implementing virtualization and these seem to be the most common choices. Companies I've worked for have always bought Dell and I have heard no complaints about the hardware. Most sysadmins love DRAC cards to death. But I assume HP has something similar.
posted by GuyZero at 6:57 PM on January 28, 2008


Best answer: We're making heavy use of Xen VMs on Dell 1U hardware of various types running openSUSE. Dell would play nicer with Red Hat, and honestly I'd buy IBMs for SUSE because they work very closely with the SUSE dev team to build kernels for SLES. We have a 4-hour service contract on these Dell boxes, and if the box isn't down we usually just RMA parts and have them the next morning.

Any of the major brands will sell hardware with n levels of redundancy and good warranties. I'm actually partial to HP 2U servers like the DL385 -- I like the storage options, the durability of the hardware, and the maintenance interfaces.

I work for Texas A&M University in Research and Graduate Studies now. We have all Dell hardware in RGS. Before, I worked for TAMU Athletics, and we had all HP hardware and served up over a TB of website and streaming video a month from our clusters.

You might also look at getting a vendor-compatible SAN and putting the VMs on the SAN, that way you can add more boxes later if you need to without stretching too hard, and you can shift instances from box to box at a moment's notice. We're doing that using Oracle Cluster File System because it'll allow simultaneous read/writes to files without getting into locking or heartbeat issues like EVMS or something else.
posted by SpecialK at 9:08 PM on January 28, 2008


Best answer: Oh, and for this --

5. We are looking at either Zimbra or Scalix. We'd probably want to start with it on a Xen slice then migrate it to a full machine as our number of seats grows -- right now we're around 6 seats but may be scaling up to 50 before the year is up. I'm also interested in other possible mail options that 1) have Outlook support, 2) allow calendar sharing and 3) have mobile support. We'd prefer to run OSS, but are not entirely opposed to closed source if it'll meet our needs better.

I've recently been surprised by the stability and just plain ol' niceness of GroupWise now that it's moved to Linux. Yes, GroupWise is now on Linux... Novell OES, which is basically SLES 10. With mobile options. And calendaring. And XMPP-compatible chat. Clients are platform-agnostic and mobile clients work on both Windows and Palm. It's about as good as you can get, honestly, and it's supported to the nth degree by Novell. And it also plays nice with Xen... in fact, we're now virtual on a lot of our Novell services.
posted by SpecialK at 9:12 PM on January 28, 2008


Response by poster: Thanks for all the answers, guys, and extra thanks to everyone who put in a quick vote; that helped tip us in a direction -- we're probably going to start with the HP DL360 and see how that works for us.

You might also look at getting a vendor-compatible SAN and putting the VMs on the SAN, that way you can add more boxes later if you need to without stretching too hard, and you can shift instances from box to box at a moment's notice.

I'd be interested in hearing more about how the logistics of this works. I'll also take a quick peek at groupwise.
posted by fishfucker at 8:22 AM on January 29, 2008


Best answer: also, forgive my ignorance, but what's the general idea with HDs here? is there an advantage/need to buying HP-branded HDs, or can we just shove third-party paired/matching SAS drives in there? Can we mix-and-match sizes with RAID mirroring if we have at least a pair of each? (ie, can we have two mirrored 100GB drives and two mirrored 50GB drives?).

I've never bought third-party drives, so I can't speak to that. I do know that the correct HP parts come with all the stuff required to slide right in and they work.

Yes, you can match two of one size and two of another. If you get the DL360 and want to use all six drive slots, there's a little cable adapter you'll need to buy to wire up the sixth drive bay.

I'd be interested in hearing more about how the logistics of this works.

Basically, you can set it up so each of the virtualized machines is stored on a SAN instead of being on one particular machine's hard drives. So let's say your Foo server is virtualized, and it's become so popular that it's using too many resources. If the Foo server is stored on a SAN, you can order another DL360 (or perhaps something ballsier) and then tell it that the virtual server you care about is on the SAN. Then you can shut down the old server, turn on the new server, and have a full upgrade with about 2 minutes of downtime. (If you spend on VMWare, you can change that 2 minutes to approximately 2 milliseconds, also, should that become a priority.)
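
To make that concrete, here's a rough sketch of that cold move using Xen's classic xm toolstack, driven over SSH from a management box -- a minimal sketch, assuming the guest's disk lives on shared SAN storage both hosts can see. The hostnames, the guest name "foo", and the config path are made up for illustration.

import subprocess

# Hypothetical sketch of moving a SAN-backed Xen guest to a new box.
# Hostnames, guest name and config path are illustrative only.
OLD_HOST = "dl360-old"   # current dom0
NEW_HOST = "dl360-new"   # replacement dom0, attached to the same SAN LUN
GUEST = "foo"

def ssh(host, cmd):
    # Run a single command on a remote dom0 over SSH.
    subprocess.check_call(["ssh", host, cmd])

# 1. Cleanly stop the guest on the old box (start of the ~2 minutes of downtime).
ssh(OLD_HOST, "xm shutdown -w %s" % GUEST)

# 2. Start it on the new box from the same config, pointing at the SAN-backed disk.
ssh(NEW_HOST, "xm create /etc/xen/%s.cfg" % GUEST)

# Xen can also live-migrate a running guest between hosts ("xm migrate --live"),
# which is the rough equivalent of the near-zero-downtime VMware option above.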

That said, $10k doesn't allow for a SAN, so it's a bit academic. But if I were you, I'd try to get one in the budget as soon as possible.
posted by Tacos Are Pretty Great at 9:27 AM on January 29, 2008


Oh, and when you're adding RAM to the 360, you'll add it in pairs, but you can use pairs of different sizes, so a machine with 2x1GB and 4x2GB is fine.
posted by Tacos Are Pretty Great at 9:30 AM on January 29, 2008


If you spend on VMWare, you can change that 2 minutes to approximately 2 milliseconds, also, should that become a priority.

VMware's VMotion moves SAN-based VMs between machines without any downtime. And Storage VMotion allows you to move the VMDK file between SAN LUNs, again without taking the VM offline.

That's the one advantage of spending the bucks on VMware over Xen - it is a much more robust product. Having said that, Xen works plenty fine.

I'd also suggest getting a SAN, but if your needs are limited it's not a necessity.
posted by GuyZero at 10:27 AM on January 29, 2008


Best answer: My small tips:

Disclaimer: I work for an IBM Business Partner. I'm also certified in VMware, XenSource (now Citrix XenServer) and Virtual Iron (another commercial Xen version).
1. Should be designed to have Linux installed (preferred flavors are CentOS and Debian). Should support Xen. Unless there's a good argument that it's pointless, should have Intel processors that support VT.

To ensure compatibility driver-wise, choose a server that is certified to work with the Linux distribution you're going to use. Finding Debian-supported servers will be quite hard, but Red Hat-compatible servers can be found everywhere.
If it works with Red Hat Enterprise Linux, it should support CentOS perfectly as well.

For the Xen support, if your server supports RHEL 5, it should support Xen on CentOS as well. Most new CPUs support Intel VT/AMD-V, but this usually only comes into play when you want to do full virtualization with hardware assists (ie. Windows), as opposed to paravirtualizing Linux guests, where you won't use the VT extensions anyway.
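
A quick way to sanity-check the VT point on an existing Linux box is to look for the "vmx" (Intel VT-x) or "svm" (AMD-V) flags in /proc/cpuinfo. A minimal Python sketch, assuming the standard Linux /proc layout; note the flag can show up even when the feature is disabled in the BIOS, so verify there too.

# Minimal sketch: check for hardware virtualization flags on a Linux host.
# "vmx" = Intel VT-x, "svm" = AMD-V. A present flag can still be disabled
# in the BIOS, so double-check there before relying on it.

def has_hw_virt(cpuinfo="/proc/cpuinfo"):
    with open(cpuinfo) as f:
        flags = f.read().split()
    return "vmx" in flags or "svm" in flags

if __name__ == "__main__":
    print("VT-x/AMD-V flag present: %s" % has_hw_virt())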

2. Obvious: Hardware RAID. Must be rack-mountable. Space/power not a huge concern for now, but prefer under 2U.

Hardware RAID: Check what levels you need to support, and how much (battery-backed) cache the controller provides. Also, keep your disk sizes and speeds in mind, tuned to your configuration.
The choice between SAS and SATA still makes a tremendous difference as well. Believe me, I've benchmarked.
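
To keep the disk-size question honest, it helps to work out usable capacity per RAID level before picking drives. A back-of-the-envelope sketch using the textbook formulas only; real controllers lose a little more to metadata, and the six-drive, 146 GB example is purely illustrative.

# Rough usable capacity for n identical disks, per RAID level (textbook formulas;
# real controllers reserve a bit more for metadata).
def usable_gb(n_disks, disk_gb, level):
    if level == 10:            # striped mirrors: half the disks hold copies
        return (n_disks // 2) * disk_gb
    if level == 5:             # one disk's worth of parity
        return (n_disks - 1) * disk_gb
    if level == 6:             # two disks' worth of parity
        return (n_disks - 2) * disk_gb
    raise ValueError("unhandled RAID level")

# e.g. six 146 GB SAS drives (an illustrative DL360-style fill):
for level in (10, 5, 6):
    print("RAID %d: %d GB usable" % (level, usable_gb(6, 146, level)))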

Do not underestimate the importance of operating expenses (power and cooling). Every quality vendor has power calculators available on their website.

3. We'll be buying two; one will be used to provide web/mail/etc services and will probably run a Xen hypervisor with all services being installed on Xen slices. The second one will be a dedicated file/database (likely MySQL) server. We're expecting to spend a little more on the database server (which will have additional disk space).

Make sure you use enough memory. I can't count the number of machines I've seen with 8 cores (ie. 2 quad-core CPUs) but only 2 or 4 GB of RAM. Even with servers, you can buy third-party RAM from Kingston at a fraction of the vendor's price.
Upside: It's cheaper, and it can even be supported/serviced by your server vendor. (I know that Kingston, for example, has agreements with IBM so that a machine under IBM warranty also covers the Kingston memory.)
Downside: It may not be as optimized, in that it may lack error checking/correcting features or use more power. It may also cost you some bargaining power with your vendor (see below). It is worth doing the calculation, though.

4. Pretty sure we want to go with a major manufacturer (Dell, HP, etc) because I assume this will make management simpler as we add more machines and mean replacement parts have generally better availability than a white-box or home-built. Ideally next time we buy a server (which would probably be before the end of the year), we'd just order another couple of whatever we went with this time.

I would of course suggest you use IBM servers (since I work for an IBM BP; we're probably Belgium's biggest "System x" partner).

Even with one of the major vendors, don't assume you'll be able to order the exact same configuration twelve months down the road. Even though the models stay about the same, there are quite a few changes in CPU types each year.
This should not pose a problem with regard to replacement parts. As long as you're covered by warranty/maintenance, replacement parts will be available. The price for this depends on the service level you order, and the warranty can be quite expensive, but it is worth it when a component fails.


We can spend up to $10k.

Don't forget that most vendors can give you quite big discounts, even when you're "only" spending 10K. Don't rule out vendors based on the list prices you find on their websites. Ask for prices and negotiate.

Only possible downside: they don't have tons of internal storage (they can hold 6 little SAS drives), but if you need tons of storage, you probably want a SAN or a NAS anyway. (Tacos Are Pretty Great)

True, but given his budget that won't be an option right now I suppose.

VMware ESX is a customized Linux OS. (me & my monkey)
Blasphemy! VMware ESX is not a customized Linux OS. The "service console" runs RHEL, but the kernel is not, not, not based on Linux.

* I prefer the HPs' 4-6 2.5-inch bays to the Dells' two 3.5-inch bays (uandt)
Watch out: physically smaller drives usually carry a performance penalty!

* The Dells do not play nicely with Debian (at least for sucky sysadmin me)
* Ah, what the hell, I can't be bothered listing arguments anymore. The HPs were just nicer and I'll never buy Dell again. And what Tacos said (uandt)


That's why I'd like to repeat this: check if the OS is listed as a compatible OS (ie. tested in the labs to work perfectly, driver-wise). You won't usually find Debian on those certified lists. This does not mean it won't work. It does mean it's harder to bug support when it's not working.

And it's true, don't buy Dell. I may be negatively biased towards HP as well, having received the "big blue brainwash", but we can agree that Dell sucks, can't we?

Though we are moving to blades for a lot of stuff. If this is a project that will expand in the future, I'd recommend checking out HP's blade systems as well. (jrishel)
Which ones? The P-class? The C-class? Or the one HP is going to introduce when their current design won't cut it anymore, once again abandoning the customers who've invested in pricey switch modules and blades that are no longer compatible?
Please check out the IBM BladeCenter, it really, really rules.

also, forgive my ignorance, but what's the general idea with HDs here? is there an advantage/need to buying HP-branded HDs, or can we just shove third-party paired/matching SAS drives in there? Can we mix-and-match sizes with RAID mirroring if we have at least a pair of each? (ie, can we have two mirrored 100GB drives and two mirrored 50GB drives?) (fishfucker)
You'll need to buy them from your server vendor. You don't buy them "bare" like you'd buy a drive to put in your desktop; they come in a carrier with the vendor-specific connector, allowing you to hot-swap them and "click" them in without needing to connect cables or fiddle with screws.

Although you know about it, watch the VT thing and triple check it. We set up a virtualization box recently and got burned when the sales guy made some assumptions about VT support.
We were using VMware, not Xen, but it was still a huge headache: we spent two days slogging through weird host OS issues (RAID drivers, etc.) only to find that there was no way to use VMware on the box at all and that we needed to start over. (mrbugsentry)


Huh? What were you doing on VMware that required VT? VMware only uses VT when you're running 64-bit guests. When you're going to virtualize CentOS under CentOS, you're better off running a paravirtualized ("virtualization-aware") kernel, which won't use VT.
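
If you want to confirm what you actually booted, a small sketch like this -- assuming the standard /proc/xen interface a Xen-enabled kernel exposes -- tells you whether you're on a Xen kernel at all, and whether you're the host (dom0) or a guest (domU):

import os

# Minimal sketch: detect whether the running kernel is Xen-aware, and whether
# we are dom0 (the host) or a domU (a guest). /proc/xen only exists under a
# Xen-enabled kernel; in dom0 the capabilities file contains "control_d".
def xen_role():
    if not os.path.exists("/proc/xen"):
        return "not running under a Xen kernel"
    try:
        with open("/proc/xen/capabilities") as f:
            caps = f.read()
    except IOError:
        caps = ""
    return "dom0 (host)" if "control_d" in caps else "domU (guest)"

print(xen_role())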

I wouldn't mix and match your drive sizes if you can help it; you'll have more flexibility that way, say if you want to go RAID 6 or something. Otherwise, you'll have to create separate virtual disks for each pair. There's no real advantage to going with HP-branded drives except for the sake of having the warranty through them; they'll probably just be rebranded Seagates anyhow. (signalnine)

also, forgive my ignorance, but what's the general idea with HDs here? is there an advantage/need to buying HP-branded HDs, or can we just shove third-party paired/matching SAS drives in there? Can we mix-and-match sizes with RAID mirroring if we have at least a pair of each? (ie, can we have two mirrored 100GB drives and two mirrored 50GB drives?).

Buy HDs from your server vendor, for reasons outlined above. Decent RAID controllers can support different arrays with different RAID levels.


If you spend on VMWare, you can change that 2 minutes to approximately 2 milliseconds, also, should that become a priority.

VMware's VMotion moves SAN-based VMs between machines without any downtime. And Storage VMotion allows you to move the VMDK file between SAN LUNs, again without taking the VM offline.

That's the one advantage of spending the bucks on VMware over Xen - it is a much more robust product. Having said that, Xen works plenty fine.

I'd also suggest getting a SAN, but if your needs are limited it's not a necessity. (GuyZero)


This is the ideal configuration if you ask me, but the cost of the SAN and VMware won't fit within the 10K budget. Nowadays you can "XenMotion" with XenServer or "LiveMigrate" with Virtual Iron as well, but VMware ESX is indeed much, much more mature than the Xen-based solutions.

So, in conclusion:
* Negotiate on prices with your hardware vendor
* IBM rules! (of course HP provides more than adequate stuff as well, but I can't really say that officially) Don't forget the Remote Supervisor Adapters (similar to Dell's DRAC and HP's iLO)
* RAM. RAM. RAM. Buy enough RAM.
* don't forget a backup solution!!
* buy enough RAM

Hope that helped a bit.
posted by lodev at 12:35 PM on January 30, 2008 [2 favorites]


Best answer: Watch out: physically smaller drives usually carry a performance penalty!

Woah there.

Given two 15k SAS drives, one 2.5 inch and the other 3.5 inch, I'd expect the 2.5 to show the following:

- similar latency
- faster seeks
- slightly slower sustained transfers
- slightly lower power use

That said, the disadvantage of slower sustained transfers is often offset by a single drive being less likely to be servicing multiple requests at the same time, due to the lower capacity.

In reality, there are performance advantages to each format, but you'd only likely care if you started beating the hell out of a SAN.

As for vendors, I've used (don't laugh) The Nerds and Tech On Web without issue. We had some negative experiences with PCSuperStore aka Erie Computers aka CompSource.

We started off using a value-added reseller, but ditched them when we reached a point where they didn't appear to add any value at all, as our internal expertise with the HP product line became equal to or greater than the sales reps'.
posted by Tacos Are Pretty Great at 12:58 PM on January 30, 2008


Oh, and I'd also point out that there are a few drive manufacturers out there who are putting 2.5-inch spindles in 3.5-inch cases for their smaller-capacity 3.5-inch SAS drives.

As such, I really, really don't think size and performance are correlated in any particularly obvious way.
posted by Tacos Are Pretty Great at 1:00 PM on January 30, 2008


@TAPG: I stand (partially :)) corrected. I was making a storage configuration/calculation earlier today, and for that config the (physically) bigger drives did matter, but those numbers are not really relevant to this configuration. Blame it on me being tired (and several timezones closer to night).
It's true that nowadays the performance of the smaller internal drives in servers is on par with 3.5-inch. Although, as with any benchmark/sizing exercise, it's impossible to define one "ideal" configuration. It always depends on what it's going to be used for, etc.
posted by lodev at 1:17 PM on January 30, 2008


Response by poster: Wow, lodev! Thanks for the long and informative answer. If Debian is poorly supported, I'm certainly not married to it. We are currently running CentOS on all our 'production' servers (I'm just running Debian as a desktop/dev platform, so I'm a little more familiar with it).

I do have a followup question:

When you're going to virtualize CentOS under CentOS, you're better off running a paravirtualized ("virtualization-aware") kernel, which won't use VT.

What benefits does paravirtualization offer in this situation? I want to make sure we follow best practices here, because we're probably going to be strictly running Linux on all our machines: 1) I've heard Xen has poor Windows support, and 2) I don't think we're planning on running any services that would be Windows-only.
posted by fishfucker at 6:14 PM on January 30, 2008


Best answer: When you're going to virtualize CentOS under CentOS, you're better off running a paravirtualized ("virtualization-aware") kernel, which won't use VT.

What benefits does paravirtualization offer in this situation? I want to make sure we follow best practices here, because we're probably going to be strictly running Linux on all our machines: 1) I've heard Xen has poor Windows support, and 2) I don't think we're planning on running any services that would be Windows-only.

Generally speaking, there are 4 kinds of "server virtualization". Here's how they compare (I hope this table won't be screwed up):
+-------------------------+------------------------+------------------------+------------------------+
| Full virtualization     | Full virtualization    | Paravirtualization     | OS partitioning        |
| with binary translation | with hardware assists  |                        |                        |
+-------------------------+------------------------+------------------------+------------------------+
| VMware ESX              | Virtual Iron (always)  | XenServer, RHEL,       |  Virtuozzo             |
|                         | XenSource/RHEL/SLES    | SLES, ...              |                        |
|                         | when running non-Linux |                        |                        |
|                         | guests.                |                        |                        |
|                         | Vmware when running    |                        |                        |
|                         | 64 bit guests.         |                        |                        |
+-------------------------+------------------------+------------------------+------------------------+
| Runs on any CPU         | Requires Intel VT /    | Runs on any CPU        | Runs on any CPU        |
|                         | AMD V cpu              |                        |                        |
+-------------------------+------------------------+------------------------+------------------------+
| Runs unmodified OS      | Runs unmodified OS     | Requires modified OS   | Modified OS            |
+-------------------------+------------------------+------------------------+------------------------+
| Performance overhead    | Theoretically less     | Lower performance      | Low performance        |
|                         | overhead than binary   | overhead, because OS   | overhead               |
|                         | translation.           | is virt. aware         |                        |
+-------------------------+------------------------+------------------------+------------------------+
| Full isolation          | Full isolation         | Full isolation         | No full isolation      |
+-------------------------+------------------------+------------------------+------------------------+

So, theoretically VMware ESX carries the biggest performance overhead, but in reality the performance is usually better than the various Xen-based hardware-assisted solutions (because the VMware hypervisor is about 10 years in the making and much more mature).

The poor(er) Windows support of Xen you heard about is probably based on the fact that the virtual hardware drivers (like the VMware Tools you install in a guest) are less mature, or on the fact that it requires CPUs with virtualization extensions.

For a purely Linux-based setup, Xen should do fine. Paravirtualization means that the OS is aware it's being virtualized (that's why you use a special kernel in that case). This way, the OS and the hypervisor can communicate and cooperate: the OS knows it can't execute the "hard to virtualize" instructions directly, and the hypervisor can present much simpler virtual hardware to the guest.
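
For a feel of what that looks like in practice, a classic xm guest config for a paravirtualized CentOS domU is only a handful of lines (xm config files are plain Python assignments). Everything below -- the name, the LVM volume path, the bridge -- is a made-up example; the relevant bit is pygrub, which boots the guest's own Xen-aware kernel rather than relying on VT.

# Hypothetical /etc/xen/web01.cfg for a paravirtualized CentOS guest.
# All names and paths are examples only.
name       = "web01"
memory     = 1024                            # MB of RAM for the guest
vcpus      = 1
bootloader = "/usr/bin/pygrub"               # boots the guest's own Xen-aware kernel
disk       = ["phy:/dev/vg0/web01,xvda,w"]   # an LVM volume (or SAN LUN) as the guest disk
vif        = ["bridge=xenbr0"]               # bridged networking through dom0
on_reboot  = "restart"
on_crash   = "restart"

You'd then start it with "xm create web01.cfg" and attach to its console with "xm console web01".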

(more on this at the 'pedia and on Slideshare)

(I could go on and on about this.)
posted by lodev at 1:47 AM on January 31, 2008 [2 favorites]


Response by poster: I could keep listening, if you've got other information to share -- I've had a lot of trouble finding plain english comparisons and explanations, and my background in IT is pretty limited.

Plus it seems rare that people want to go out on a limb and say "you want to do X" because it almost always starts a religious war with people who think you should do Y. Still, either way it's valuable information for someone with no experience.

I'll definitely check out the wikipedia and slideshare stuff. Thanks for all your help!
posted by fishfucker at 9:57 AM on January 31, 2008

