Two different GPUs in one machine -- what other hardware do I need?
April 1, 2019 4:03 PM

I would like to put two big, different GPUs in my computer, but I think to do that I will probably need a different power supply and cooling system (and who knows what else). Help me get a grip on how to do this.

I already have a 1080Ti in my current computer and, due to good fortune, also a 2080Ti on the way.

My purpose for these GPUs is machine learning, not gaming. So I promise having two GPUs makes sense, and these otherwise overpriced top-of-the-line models are actually measurably better.

I want to build a machine with two GPUs, with the 2080Ti as primary (hooked up to display and actually used for rendering graphics), with the 1080Ti used only for number-crunching. Not using SLI.

In terms of hardware, what kind of motherboard, cooling, and power do I need? What even should I be looking for? How should I figure it out?

This is kind of a weird use case; pcpartpicker.com won’t even let me make a machine with two different types of GPU, probably because there’s no point if you’re gaming. But I know it can be done and that it will be useful for me; I just don’t know what hardware I’ll need to avoid things melting or crashing.
posted by vogon_poet to Technology (10 answers total) 2 users marked this as a favorite
 
https://www.reddit.com/r/buildapc/ is a good resource for this.
posted by humboldt32 at 4:27 PM on April 1, 2019


This is the kind of thing where I'd start with Supermicro. They have some motherboards optimized for multiple GPUs. On each motherboard page, there are links to matching chassis or full server configs. For example, the X11DPG-QT motherboard goes with the 7049GP-TRT workstation, which has plenty of space, power and cooling. (I picked that combination at random from the GPU motherboard page. You'll probably make a different choice after some research.)

You'll have to find a reseller. They've got a list of system integrators who can help you design a system, though they might not want to deal with you if you're not a business; a lot of Supermicro's stuff is also available on, e.g., Newegg.
posted by clawsoon at 4:44 PM on April 1, 2019


On second thought, those are pretty expensive systems. If you don't want to spend that kind of money, you can at least figure out your required power pretty easily. Get the wattage from the GPU spec sheets: it looks like 260W for the 2080Ti (8-pin + 8-pin supplementary power connectors) and 250W for the 1080Ti (6-pin + 8-pin).

Add your max CPU power - let's say 250W, though that's probably overkill - and you'll get a total system power of ~750W. Get a 1000W PSU to make sure you've got headroom. It'll need 3 x 8-pin and 1 x 6-pin supplementary power connectors.
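
The arithmetic is easy to script if you want to play with the numbers; a rough sketch (the wattages are from the spec sheets above, and the 1.3x headroom factor is just my rule of thumb, not gospel):

# Back-of-the-envelope PSU sizing: add up spec-sheet board power, leave a margin.
gpu_watts = {"2080Ti": 260, "1080Ti": 250}   # from the NVIDIA spec sheets
cpu_watts = 250                              # generous ceiling for the CPU

total = sum(gpu_watts.values()) + cpu_watts
headroom = 1.3                               # rule of thumb, not an official figure
print("Estimated draw: %d W -> shop for a ~%d W PSU" % (total, int(total * headroom)))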

It looks like both GPUs are PCI-e x16, so you'll need a motherboard with at least two of those slots.

The Supermicro motherboard matrix might help you find a cheaper option. A quick browse (again, you should spend much more time looking than I did) finds... let's see... the X11SAE-F motherboard (2 PCI-E 3.0 x16), which pairs with the SC743TQ-1200B-SQ chassis. The included 1200W PSU appears to have enough PCI-E supplementary power connectors to service your GPUs.

On a closer look at the X11SAE-F motherboard specs, it looks like the 2 PCI-E 3.0 x16 slots will only run at x16 + nothing or x8 + x8, so that's less than ideal for you. You'll have to look through a few more until you find one that has at least two full-speed x16 slots and doesn't cost a ridiculous amount.
posted by clawsoon at 5:36 PM on April 1, 2019


Best answer: The number of PCI-E lanes available to you is a function of both the processor and the motherboard, so if you want to run both cards at PCI-E x16, you'll need a processor that supports more than 32 PCI-E lanes (and a motherboard to match). That puts you squarely in the realm of workstation/server components, i.e. a Xeon or a Ryzen Threadripper that supports it; if you have a desktop-class processor, x8+x8 is likely the best you'll be able to do.
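
If you're on Linux and want to see what link each card actually negotiates in whatever board you end up with, the kernel exposes it in sysfs (the same info lspci -vv reports under LnkSta); a rough sketch:

# Report the negotiated PCIe link width/speed for every display-class PCI device.
import glob, os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        with open(os.path.join(dev, "class")) as f:
            pci_class = f.read().strip()
        if not pci_class.startswith("0x03"):          # 0x03xxxx = display controllers
            continue
        width = open(os.path.join(dev, "current_link_width")).read().strip()
        speed = open(os.path.join(dev, "current_link_speed")).read().strip()
        print("%s: x%s @ %s" % (os.path.basename(dev), width, speed))
    except OSError:
        pass                                          # attribute missing on some devices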
posted by Aleyn at 5:54 PM on April 1, 2019


Response by poster: That settles it; I'll have to either get a whole new machine, or fall back on plan B and put the 1080Ti in an eGPU enclosure, which would be nice to have...
posted by vogon_poet at 6:10 PM on April 1, 2019


Note that putting one of them in an eGPU enclosure will not get you around the limit that Aleyn mentions, and you will also be up against limits of the external interface.

You would most likely be looking at a connection over a Thunderbolt 3 port, which basically serves as an external 4-lane PCI-e interface. So you will only have 4 lanes to the card, not a full 16.

And the Thunderbolt controller will be using up 4 of your CPU & motherboard's lanes in order to provide that external port.

You're very likely to be better off putting both GPUs in your existing motherboard.
posted by automatronic at 6:26 PM on April 1, 2019 [1 favorite]


PCI-E lanes aren't everything. Even for gaming with SLI, the performance difference from running on fewer lanes can be minimal.

I'm an ML neophyte, but it seems to me that ML use is either disk-I/O-bound (if your dataset won't fit in VRAM) or GPU-bound (if it will), without a lot of transferring back and forth over the bus like you see in gaming use.

This guy seems to know what he's talking about and claims the number of PCI-E lanes has almost no effect on ML performance.
posted by neckro23 at 7:50 PM on April 1, 2019 [4 favorites]


Indeed, 4 x PCI-e 3.0 lanes is still a massive amount of bandwidth - it works out to 4GB/s.

Older laptops can run GPUs very effectively for gaming on as little as a single PCI-e 2.0 lane (0.5GB/s), snuck in through an ExpressCard connector or an internal mini-PCIe or M.2 slot.

It only matters if your workload depends on shifting large amounts of data on and off the card quickly from somewhere even faster - i.e. basically from main memory.

And much of the work that goes into implementing numerical computing on GPUs is focused on partitioning work into chunks where all the data needed fits into the GPU memory - precisely to avoid being affected by this bottleneck.

So depending on what you're doing and what tools you use, it may not be an issue for you at all.
posted by automatronic at 2:04 AM on April 2, 2019


Response by poster: The eGPU enclosure would be something to plug my laptop into for prototyping purposes.

I do have to shift a lot of data that fits in RAM on and off the GPU constantly (high-res images, etc.), and conventional wisdom is that this is the major bottleneck for a lot of machine learning. But I suppose I should benchmark that before deciding.
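
Probably something crude like timing repeated host-to-GPU copies, e.g. (PyTorch used here just because the timing is easy to get right; the shapes are stand-ins for my real data):

import time
import torch

batch = torch.randn(64, 3, 512, 512)        # ~200 MB of fake "images" in host RAM
device = torch.device("cuda:0")             # whichever card is being tested

x = batch.to(device)                        # warm-up: CUDA context init etc.
torch.cuda.synchronize()

start = time.perf_counter()
for _ in range(20):
    x = batch.to(device)                    # host -> device copy
torch.cuda.synchronize()                    # wait until all copies have finished
elapsed = time.perf_counter() - start

gb = 20 * batch.numel() * batch.element_size() / 1e9
print("~%.1f GB/s effective host->device" % (gb / elapsed))

If that number comes out well above the ~4GB/s a Thunderbolt enclosure can deliver, the eGPU route really would be giving something up.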
posted by vogon_poet at 7:38 AM on April 2, 2019


I have a similar use case to you, and in my situation just plugging both GPUs into my motherboard pretty much worked exactly the way I wanted.
I plug my monitor into the GPU that I want to render the display.
When I am running code in TensorFlow, it's pretty straightforward to choose which GPU to use.
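Roughly like this (TF 2.x-style shown; which index maps to which physical card depends on how the driver enumerates them, so check the list before trusting :0 vs :1):

import tensorflow as tf

# See how TensorFlow numbers the cards first -- don't assume the 2080Ti is GPU:0.
print(tf.config.list_physical_devices("GPU"))

# Pin the number-crunching to one card; the display card is left alone.
with tf.device("/GPU:1"):
    x = tf.random.normal([4096, 4096])
    y = tf.matmul(x, x)
print(y.device)

(Setting CUDA_VISIBLE_DEVICES in the environment before TensorFlow starts is another way to hide the display card from it entirely.)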
posted by vegetableagony at 11:37 AM on April 5, 2019


This thread is closed to new comments.