Quad Xeon vs. Core2 Quad?
February 12, 2008 6:46 PM   Subscribe

Quad Xeon vs. Core2 Quad?

I've been charged with building a system for an electrical engineer. He's been given a recommendation for a quad-core Xeon processor.

I don't have a lot of experience with server processors.

I understand that server processors scale better in multiprocessor environments and also tend to have higher heat tolerances, but are there any other advantages to bringing a Xeon to the desktop (especially considering its higher price)? Any caveats?

posted by Asef Jil to Computers & Internet (14 answers total) 3 users marked this as a favorite
It depends entirely on what the engineer uses. I'm assuming, though, that he will be doing a lot of computationally intensive simulations. If you can give us an idea of the tool, I might be able to help a bit more (does Cadence, Mentor Graphics, or Synopsys sound familiar?).

If it were my choice, I would go with the Xeon chip. They generally have the fastest caches and are coupled with a faster memory bus. To me, in this specific case, it's worth it.
posted by spiderskull at 7:03 PM on February 12, 2008

I should mention that I've never done side-by-side comparisons. My only experience is with Cadence's ASIC design toolset on a Xeon system, and it was blazingly fast compared to anything else I'd seen. So take what I say as personal opinion with a healthy-sized grain of salt.
posted by spiderskull at 7:04 PM on February 12, 2008

I had the luck of picking up an IBM IntelliStation with two dual-core Xeon chips, each running at about 3GHz, for a song at a surplus sale. There was absolutely nothing I could do to bog down that computer. I could have about 35 tabs open in Firefox, 2 or 3 Word documents open, and an Excel document open, all while watching an HD video and installing a new program, and the system would not even hiccup when I switched programs.

I miss that computer every time my current system comes to a standstill because I opened a PDF.
posted by 517 at 7:26 PM on February 12, 2008

Response by poster: OP here. I don't know the tool that the EE is using beyond that it's capable of utilizing multiple cores. He designs transmission towers, if that's any help. The current simulations he's running take his C2D 45 minutes to run. He's looking to halve this time.
posted by Asef Jil at 7:30 PM on February 12, 2008
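
As an aside on the speedup target above: whether more cores can actually halve a 45-minute run depends on how much of the job parallelizes, per Amdahl's law. A quick sketch (the 90% parallel fraction is an assumed, illustrative number, not something from the thread):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Predicted speedup for a workload in which parallel_fraction
    of the runtime can be spread across n_cores (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# If 90% of the 45-minute run parallelizes, 4 cores give roughly
# a 3.08x speedup, cutting the run to about 14.6 minutes -- but if
# only 50% parallelizes, no number of cores can ever halve it.
speedup = amdahl_speedup(0.9, 4)
new_runtime = 45 / speedup
```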

The Xeon is multi-threaded. With the proper version of the operating system, a quad-core Xeon looks like 8 processors. Running both threads of a Xeon full out gains you about another 50% compute speed, varying as a function of just what you're trying to do.

If he's already interested in multi-core for solving his problem, it means he's using software that will take advantage of multiple processors. Usually such software can use as many processors as are available, and in that case it will run substantially faster on the Xeon than on the Core2, all other things being equal.
posted by Steven C. Den Beste at 7:43 PM on February 12, 2008

I used to own a workstation which had two Xeons in it. Using WinXP and with the proper BIOS settings, it looked like 4 processors to the OS. However, 4 was the limit with XP Pro; to use more than that you had to go to one of the Windows server versions.

I don't know if they've raised that limit with Vista.
posted by Steven C. Den Beste at 7:46 PM on February 12, 2008

You're not the first person to wonder about Xeon versus Core.

Here's an Ars Technica thread (generally not a bad forum, although beware of the fanboys) with some good information in it.

Apparently the Xeons offer a faster FSB but use FB-DIMMs (FB for "Fully Buffered"). They are to regular DIMMs what PCI Express is to PCI: a serial rather than parallel interface, which has its benefits and hazards. The modules are more expensive, for one, even compared to registered (ECC) DIMMs, and they introduce some additional latency. Frankly, I think FB-DIMMs will eventually go the way of Rambus, but that doesn't mean you should necessarily avoid them -- just buy the RAM you need right away; don't count on bigger/faster/cheaper modules coming down the pipeline.

Bottom line is that Xeon and Core use a different architecture; if your engineer spec'd a Xeon and you give him a Core workstation, you'll probably never hear the end of it. Although I'd have trouble justifying the expense myself, without knowing his exact workload and rationale for asking for a Xeon, there's no way to say it's unreasonable.
posted by Kadin2048 at 8:06 PM on February 12, 2008

FB-DIMMs are a pretty good system, actually. They add some latency, because the access to them is 'packetized' -- bytes are squirted in packets over a serial bus, kind of like a super-fast Ethernet. Writing or reading any particular byte will usually take longer, because the packetization adds overhead.

In exchange, you get much, much higher memory bandwidth, and the ability to both read from and write to memory at the same time, which a normal memory bus can't do. This is really good for a multi-processor machine, because it's much less likely that the access from one CPU will stall the others. If a few threads are writing like crazy, and a few threads are reading like crazy, they can all operate to the full potential of the memory bandwidth.

The upshot is that for things like games, FB-DIMMs and Xeons are a little slower, because games have highly random data access patterns, and tend to load a chunk of code, run it out of cache as much as possible, write a result, and then go grab another chunk of code. This works best with the standard DDR2 and DDR3 memories.

But for things like intensive multithreaded computation, which often runs the same code over a great deal of data that needs to stream into and out of the processors, FB-DIMM wins by a mile, because threads don't stomp on each other. Such computations are usually bandwidth-limited rather than latency-limited, and FB-DIMMs are enormously better for that purpose.

You also have the advantage of easily being able to expand to very large amounts of memory; because of the way FB-DIMMs work, you can hang a lot more of them off a given bus. The Mac Pro, for instance, uses FB-DIMMs, and will be expandable to 64 gigs of RAM if and when 8 gig FB-DIMMs come out. (it has 8 RAM slots.)

With the Mac Pro, you get the best overall speed if you populate half the banks, 2 each on both of the expansion cards. If you get a system that has more than 4 banks, consult the manual before buying to find out what's best for speed. As you add DIMMs past a certain point, the FB bus slows down a bit, as it sends signals intended for chips 3 and 4 through chips 1 and 2... each chip has to store and forward the data, so going past 2 on a given bus adds a little latency. This probably won't be terribly noticeable, but the effect does exist and you should be aware of it.
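
The store-and-forward effect described above can be modeled as a fixed base latency plus a small per-hop penalty for each buffer a request passes through on the daisy chain. A toy sketch, with made-up illustrative numbers (not datasheet values):

```python
def read_latency_ns(dimm_position, base_ns=50.0, hop_ns=3.0):
    """Toy model of FB-DIMM daisy-chain latency: a request to the
    Nth DIMM on a channel is forwarded through the buffers of the
    N-1 DIMMs in front of it, each adding a small store-and-forward
    delay. base_ns and hop_ns are illustrative assumptions."""
    hops = dimm_position - 1   # buffers the request passes through
    return base_ns + hops * hop_ns

# First DIMM on the channel vs. fourth: 50.0 ns vs. 59.0 ns --
# small per access, which is why the effect is real but rarely noticeable.
near = read_latency_ns(1)
far = read_latency_ns(4)
```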


Oh, one more thought: the Mac Pro is one of the cheapest of the Xeon workstations available, but the Apple-supplied drivers for the machine won't work with anything but 32-bit XP. If you don't mind a lot of fussing to download and install drivers manually, and you can live without much XP/Mac integration, the Mac Pro can be a pretty good XP workstation on the cheap. Just don't buy the RAM from Apple. OWC is a good source.

If you can afford it, though, get the workstation from somewhere like Dell. You'll pay more for it, but it will come with all the right drivers and everything set up and ready to go. The Mac would only be good if you're on a very tight budget and have time to invest.

Well, the Mac would be good in one other situation: if your software also runs on OSX. If it does, well... the current 8-core Mac Pro is a kickass deal in terms of bang per buck, and should be high on the short list.
posted by Malor at 9:02 PM on February 12, 2008 [1 favorite]

Oh, and I don't think FB-DIMMs are going to die, because they still use the same fundamental memory technology on the actual DIMMs... DDR2 in the current generation. They just add an extra glue-layer chip to handle the packetization of the FB bus. It's not like the RAMBUS crap, which was totally different and had the ridiculous royalties... rather, it's the regular stuff on a bus that's better for high-bandwidth applications.
posted by Malor at 9:05 PM on February 12, 2008

It just occurred to me that I should emphasize something critical about the Xeon threading.

Microsoft, in its infinite wisdom, has placed a limit of 4 on the number of processors that you can use with XP Pro. That doesn't mean "4 CPUs". That means "4 threads".

You control whether threading is enabled with BIOS settings. If you take a quad-core Xeon and tell it to enable threading on all processors, and run it with XP Pro, what will happen is that it will use the primary and secondary threads on two of the four processors, and not use the other two processors at all. Which means you'll be wasting half the hardware, and your performance will be worse.

To use all 8 pseudo-processors, you cannot use XP Pro. You'd have to use Windows Server, at least in terms of that generation of OS from Microsoft. I do not know what they've done about that with Vista, however; it's possible that Vista will permit a larger number of processors without requiring you to go to a server license.
posted by Steven C. Den Beste at 7:10 AM on February 13, 2008

As usual, SCDB shows up and announces, in terms of absolute certainty, something that's totally wrong. No other poster I'm aware of is consistently wrong more often, especially not when speaking with such authority.

Microsoft tracks the difference between DIES and CORES. You can use any two dies: XP will drive any number of cores, whether real or hyperthreaded, on those dies. From Microsoft's own page on the matter:

Windows XP Professional can support up to two processors regardless of the number of cores on the processor. Microsoft Windows XP Home supports one processor.
posted by Malor at 8:19 AM on February 13, 2008 [4 favorites]
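
For what it's worth, a quick way to see how many logical processors the OS actually exposes (with Hyper-Threading enabled this counts threads, not physical cores or sockets) is a one-liner; this sketch just assumes a Python interpreter is handy:

```python
import os

# Number of logical processors visible to the OS. On a dual-die,
# dual-core-per-die box this would report 4; the XP Pro licensing
# limit discussed above is on dies (sockets), not on this number.
logical = os.cpu_count()
print(f"OS sees {logical} logical processor(s)")
```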

"Bottom line is that Xeon and Core use a different architecture"

No, they don't. Older Xeons are NetBurst-based (the same architecture as the Pentium 4) and took a while to catch up, but modern ones, especially anything quad-core, are basically Core 2s.

Hyperthreading was mostly of use on NetBurst because those chips made atrocious use of their execution units and had very high latency, which could be partly hidden by switching thread contexts. It isn't supported by newer Xeons, though it will make a return in Nehalem. If you want 8 cores, you get a pair of quad-core CPUs, or 4 dual-cores.

That's one of the reasons to go Xeon: they're your only choice for multi-socket Intel hardware, since they have the necessary hardware and firmware for synchronising with remote CPUs. Workstation- and server-class hardware also brings with it things like ECC memory, which is very important when you're counting on 4GB+ of memory to Just Work, though this is more to do with the motherboard chipset than the CPU.
posted by Freaky at 10:07 AM on February 13, 2008

The Xeon is multi-threaded. With the proper version of the operating system, a quad-core Xeon looks like 8 processors.

None of the currently offered Intel Xeon CPUs have hyperthreading. No quad-core Xeon has ever been hyperthreaded, although hyperthreading is planned for one of the future core designs (as Freaky mentions above).

As Freaky also mentioned, all currently offered Xeons are in fact Core 2 based.

Current Xeons have one major advantage over Core 2 Quads: more cache. I think you will also have to buy a motherboard with an FB-DIMM-only northbridge for Xeons, which is bad because FB-DIMMs are much more expensive and less power-efficient, but good if you want ECC (despite what Freaky says, regular DDR2 SDRAM is pretty reliable even in huge quantities and is unlikely to corrupt your data).

Can we please get SCDB banned from AskMe? Please? How is it remotely possible to trust him on anything he writes here?
posted by azazello at 2:04 PM on February 13, 2008 [1 favorite]

Memory might be "pretty reliable", but I think most people (especially computer professionals who depend on correct operation, soft failures, and rapid fault diagnosis) would prefer the first sign of a memory problem to be a log message identifying the faulty DIMM (or, if it's really bad, a system panic before any real damage is done), not a corrupt filesystem and broken backups because one bad bit in 68 billion went undetected for weeks.

If it's a machine for doing engineering work, I'd say ECC is a no-brainer (it doesn't need to be FB-DIMM; you can get ECC DDR2 too, Registered and Unbuffered). If it's an overpowered solitaire-and-email machine, then sure, maybe go without if ECC really adds that much to the price.
posted by Freaky at 5:14 AM on February 14, 2008
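
For the curious: ECC works by storing extra check bits alongside each word so that a single flipped bit can be located and corrected on read. Real ECC DIMMs use a SECDED code over 64-bit words; a toy Hamming(7,4) code shows the same single-bit-correction idea in miniature:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits (int 0-15) into a 7-bit Hamming codeword
    that can correct any single flipped bit."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]   # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(codeword):
    """Return (corrected data nibble, error position or 0 if clean).
    The parity-check syndrome directly names the flipped bit."""
    bits = [(codeword >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        bits[syndrome - 1] ^= 1  # flip the bad bit back
    d = [bits[2], bits[4], bits[5], bits[6]]
    return sum(b << i for i, b in enumerate(d)), syndrome
```

An ECC system does the same thing in hardware, and (as above) logs which DIMM produced the nonzero syndrome.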

This thread is closed to new comments.