Will simultaneous network connections increase transfer speed?
February 1, 2008 1:36 PM

When transferring files between a Mac Pro and a MacBook, can I increase transfer speed by connecting the two via FireWire, Ethernet, AND wireless simultaneously?
posted by unofficialsquaw.com to Computers & Internet (11 answers total)
 
Nope. That's called load balancing, and it won't happen by default; the transfer will just use one connection. FireWire is probably the fastest, but I dunno exactly how that works with Macs.
posted by GuyZero at 1:43 PM on February 1, 2008


Seconded: the computers will pick one connection and use it. FireWire will probably be the fastest for the transfer; the wireless will definitely be the slowest.
posted by McSly at 1:48 PM on February 1, 2008


Gigabit Ethernet should actually be the fastest (you'd need a gigabit switch, or you can just connect the two directly with a cable).
posted by zsazsa at 1:50 PM on February 1, 2008


Recent Macs have auto-sensing Ethernet, meaning you don't need a crossover cable.

By recent I mean roughly the last 5 years.

GigE would be fastest, then FW800 (not on MacBooks, IIRC), then FW400, then wireless.

If it's a huge amount, start it copying and then go to work, the store, or bed.
posted by KenManiac at 1:57 PM on February 1, 2008


Despite being a Mac owner and fairly tech-savvy, I've never dealt with this before, but would each connection show up as some new form of shared drive?

If so, couldn't you actually get a speedup by copying separate files over separate connections?
posted by bitdamaged at 2:15 PM on February 1, 2008


You could try installing firehose ...
posted by nobeagle at 2:36 PM on February 1, 2008


Go over GigE or FireWire, like others have suggested. However, if this transfer involves a lot of files, I would suggest mastering rsync. rsync makes it much easier to restart the copy process if it bails in the middle.
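
Something like this, as a rough sketch (the hostname and paths here are just placeholders):

rsync -avP /path/to/files/ user@macbook.local:/path/to/dest/

-a preserves permissions and timestamps, and -P keeps partially transferred files and shows progress, so if the copy dies you can rerun the same command and it picks up more or less where it left off.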
posted by chairface at 2:47 PM on February 1, 2008


I would only use rsync for the second attempt. It's pretty slow compared to tar (over a pipe).

How big is the data you're transferring?
posted by popechunk at 5:44 PM on February 1, 2008


bitdamaged - yeah, that should be doable. You'd get 3 distinct target IP addresses, and you'd just copy parts of the whole to each of them. (Though you might have to manually configure each link onto its own network - ARP might get confused if they're all in the same 169.254/16 autoconfigured address block.)
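
A sketch of that manual config (interface names and addresses here are just examples - adjust for your machines):

# on the Mac Pro, give each link its own subnet:
sudo ifconfig en0 inet 10.0.1.1 netmask 255.255.255.0
sudo ifconfig fw0 inet 10.0.2.1 netmask 255.255.255.0

...then mirror that on the MacBook with 10.0.1.2 and 10.0.2.2, and run one copy job against each target address.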

popechunk - out of curiosity, how are you tarring over the network? Using socat, or something else? Odds are, though, that if you're finding rsync to be slow then you're doing it wrong. By default it'll run over ssh (so encrypted), but unless your endpoints are really slow you shouldn't have a problem with them being CPU-bound. Alternatively, you can just use the native rsync protocol, which runs directly on top of TCP. The only inherently slow bit of rsync is building the file list before the transfer, which can be annoying if you have many, many, many files to copy.
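
For example (the hostname and module name here are hypothetical):

# default transport: rsync over ssh
rsync -av /src/ user@host:/dest/
# native rsync protocol, straight over TCP (needs an rsync daemon on the far end)
rsync -av /src/ rsync://host/module/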

OP - assuming both devices have GigE (dunno about the MacBook), you're probably better off just doing it over that - the time saved by a load-balancing config is unlikely to be a win compared to the time spent futzing around to make it work.
posted by russm at 8:44 PM on February 1, 2008


Use gigabit Ethernet; that should be your best bet, I think. As for load balancing your connections, first see if you can even saturate the gigabit link. Even allowing for 10% overhead, that's still around 112 megabytes per second. I highly doubt that your hard drives (especially the one in your laptop) can read and write that fast, sustained.
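
(Working it out: 1 gigabit/s is 1000 megabits/s, and 1000 / 8 = 125 megabytes/s; knock off 10% for protocol overhead and you're at roughly 112 MB/s.)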

Your bottleneck is not your link, but the hard drives. (Unless you've got some fast disk setup, like a SCSI disk or two or a RAID array, that we don't know about.)
posted by Brian Puccio at 9:43 AM on February 2, 2008


popechunk - out of curiosity, how are you tarring over the network?

I generally use a variant of:

tar cfpb - 1024 DIRNAME | rsh host '( cd /some/dir ; tar xfp - )'

You can use ssh, but it's slower. I guess netcat would work. You can break this up into many separate jobs by running a separate tar for each dir you're copying. You can spread it over multiple destination interfaces by changing the hostname (or IP) you're rsh'ing to for each interface. You can play with raising the block size. The key is to have as many of these running as possible, and to make sure that you're completely saturating whatever the slowest part of the transfer is (be it disk or network or whatever).
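
The netcat version would look something like this (hostname and port are placeholders, and the listen flags differ a bit between netcat variants):

# on the receiving machine, from inside the destination dir:
nc -l 9000 | tar xfp -
# on the sending machine:
tar cfpb - 1024 DIRNAME | nc desthost 9000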

By way of qualification, I've migrated numerous SANs between data centers.
posted by popechunk at 6:18 AM on February 4, 2008

