
'mo bits 'mo money
September 22, 2010 8:14 AM   Subscribe

Are we paying too much for 10-gigabit ethernet buildout for a cluster?

We are working to reduce I/O bottlenecks on a 30-node LSF-based cluster talking to a NetApp NAS. The clients are all on gig-E, and with 8 cores each they are capable of handling much more than the ~70 MB/s of throughput they get now. We are looking at simply switching up to 10-gigabit ethernet for roughly 8x the I/O, but the costs seem outrageous. Our sysadmins are saying it will cost around $4K/port end-to-end (including switch, NICs, and fiber adapters). I'm not a network guru, but this seems high based on parts I see on other corporate websites. One area of contention is whether we need fiber or copper. My understanding is that CX4 is fine and has all the performance of fiber as long as you stay under the 45-foot limit.
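A quick back-of-the-envelope check on those numbers (a sketch; the ~6% framing-overhead figure is a rough rule of thumb, not something stated in the thread):

```shell
#!/bin/sh
# 1 GbE line rate: 1000 Mbit/s / 8 = 125 MB/s raw.
raw_MBps=$((1000 / 8))
echo "1 GbE raw:    ${raw_MBps} MB/s"

# Ethernet + IP + TCP framing at a 1500-byte MTU eats roughly 6%,
# leaving about 117 MB/s of goodput per client port.
goodput_MBps=$((raw_MBps * 94 / 100))
echo "1 GbE usable: ~${goodput_MBps} MB/s"

# The observed ~70 MB/s is well under that, which suggests the NAS or
# the protocol, not the wire, may already be the limiting factor.
```

If the clients can't saturate their existing gig links, a faster link alone may not buy the hoped-for 8x.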

So... 1) is that pricing nuts, and 2) is copper okay for shorter distances?

extra credit for links to stuff to help us get there!

thanks!
posted by H. Roark to Computers & Internet (9 answers total)
 
I've recently been pricing out some gig and 10-gig stuff, and 10 gig is crazy expensive: the fairly cheap HP gear I'm looking at is $800 for a single transceiver. Could you do a NIC-teaming setup where each client gets two or even three gig ports? Maybe a few 48-port gig switches with 10-gig uplinks, so you can group clients onto 3 switches with 3 ports each and a dedicated 10 Gb uplink to the NAS?
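For reference, a minimal sketch of that kind of teaming on a Linux client using the kernel bonding driver (the interface names and addressing are hypothetical; 802.3ad mode also requires a matching LACP port group on the switch):

```
# /etc/network/interfaces (Debian-style; hypothetical interface names)
auto bond0
iface bond0 inet static
    address 10.0.0.21
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad              # LACP; switch ports must be in a matching LAG
    bond-miimon 100                # link-check interval in ms
    bond-xmit-hash-policy layer3+4 # spread flows across slaves by IP/port
```

One caveat: a single TCP stream still tops out at one link's speed; aggregation mostly helps when many clients or streams hit the NAS at once.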
posted by pocams at 8:23 AM on September 22, 2010


Listen to the engineers; it has to be supported after it's purchased.

I'd get the NAS to 10G first and then think about moving the clients to 10G if you need more I/O, but I doubt the NAS can push 10G.

Don't bother with the copper unless it's Cat6A.
posted by iamabot at 8:42 AM on September 22, 2010


Don't bother with the copper unless it's Cat6A.
Can you clarify this? I was looking at CX4 too...
posted by yeoz at 8:50 AM on September 22, 2010


Luckily the prices on gig-E stuff have fallen through the floor with the introduction of 10 GbE. Team it up!
posted by tmt at 8:51 AM on September 22, 2010


Though, just from the stuff I was looking at myself, it's definitely the 10-gig switches that are the major expense here. 10 GbE NICs aren't really that much more expensive than fancy-pants 1 Gb NICs with features like TOE (TCP offload).
posted by yeoz at 9:01 AM on September 22, 2010


I worry about the distance estimates; to my mind it does not make sense to expand or upgrade an infrastructure with CX4. 10GBASE-T has similar performance to CX4 (slightly higher latency) but dramatically better reach, and it leverages Cat6A for its cabling infrastructure.

Depending on the layout of the datacenter and the cable-management infrastructure, it is possible to approach 15 m cable runs with only 3-4 racks. Nearly every time we've upgraded an infrastructure to 10G and met the specified requirements and growth projections, we've had to come back later and retrofit because additional capacity was needed.

Fiber in my experience is the best way to go, followed by 10GBASE-T over Cat6A, then CX4 if you are dealing with a really small footprint.
posted by iamabot at 9:05 AM on September 22, 2010


Also, if you're already on a copper plant and it's Cat6A, it makes sense to keep it: you've already made the investment, and with Cat6A there's no need to replace it.
posted by iamabot at 9:16 AM on September 22, 2010


If you're not yet using jumbo frames on your 1 Gb network, that is where I would start.
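For example, enabling jumbo frames on a Linux client might look like this (a sketch with a hypothetical interface name; every device in the path — client NICs, switches, and the NetApp — must agree on the MTU, or you'll see fragmentation or silently dropped packets):

```
# /etc/network/interfaces (Debian-style; hypothetical interface name)
auto eth0
iface eth0 inet dhcp
    mtu 9000    # jumbo frames; 9000 is the common convention
```

Verify end-to-end with a large non-fragmenting ping (e.g. `ping -M do -s 8972 <nas>`) before rolling it out cluster-wide.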

Next, I would take a look at upgrading the NetApp to 10 Gb and bonding pairs of 1 Gb lines to the servers.

Be very careful with 10GBASE-T: it has higher latency and much higher power requirements. I've used both fiber and CX4 with good results, although the distance limitations with CX4 are irritating, to say the least. The nice thing about mixing fiber and CX4 is that you can reuse the same SFP+ socket cards in the NetApps/servers.
posted by alienzero at 9:25 AM on September 22, 2010


Nthing the jumbo frames, and make sure your switches can offer non-blocking gigabit performance.
posted by iamabot at 1:01 PM on September 22, 2010

