Is there any way to resume an interrupted file copy in Mac OS X?
July 14, 2007 1:34 PM
Specifically, I'm trying to download (in the Finder) a huge file from a Samba share that keeps getting disconnected before completion because the network connection is a bit spotty. Am I overlooking functionality built into OS X, or is there an application that would be helpful?
Perhaps the best way to explain the syntax is some examples:
rsync *.c foo:src/
this would transfer all files matching the pattern *.c from the current directory to the directory src on the machine foo. If any of the files already exist on the remote system then the rsync remote-update protocol is used to update the file by sending only the differences. See the tech report for details.
posted by about_time at 1:38 PM on July 14, 2007
That's a command line tool, btw. Not a Finder app.
posted by about_time at 1:38 PM on July 14, 2007
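A minimal sketch of what that syntax looks like for this situation, assuming (hypothetically) that the machine hosting the share also accepts SSH logins under the name fileserver:
# --partial keeps a half-copied file instead of deleting it, so rerunning
# the identical command after a dropped connection reuses what's already there.
rsync --partial --progress user@fileserver:/path/to/big/file ~/Downloads/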
rsync won't help much for one big file over Samba -- through the mount it'll want to read the file from the start so it can verify checksums -- so be sure to run it over SSH or against a dedicated rsyncd instead.
If you have some control over the remote side you could set up a web or FTP server to serve the file in a form where there are more resume-friendly applications.
Another alternative is dd. Again, a command line application, but it won't need any additional services; find how big your local copy of the file is and do:
dd if=/path/to/remote/file of=/path/to/local/file skip=1234 seek=1234 bs=4k
Where 1234 is a value less than or equal to however many bytes you've got. It'll then copy in 4k chunks (that's bs=, block size, default is 512 bytes which is a bit small) from there -- seek= sets the position in the output file and skip= sets the position in the input file.
posted by Freaky at 2:00 PM on July 14, 2007
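Putting that recipe together as a runnable sketch -- the paths are hypothetical, and (as comes out below) skip= and seek= count blocks of the bs= size rather than bytes:
BS=4096                               # same as bs=4k
LOCAL="/path/to/local/file"           # the partial copy (hypothetical path)
REMOTE="/Volumes/share/path/to/file"  # the mounted share (hypothetical path)

BYTES=$(stat -f %z "$LOCAL")          # size of the partial copy in bytes (BSD stat)
BLOCKS=$((BYTES / BS))                # bytes -> whole 4k blocks; rounding down
                                      # just recopies the last partial block

dd if="$REMOTE" of="$LOCAL" bs=4k skip=$BLOCKS seek=$BLOCKS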
"find how big your local copy of the file is and do: dd if=/path/to/remote/file of=/path/to/local/file skip=1234 seek=1234 bs=4k"
When I try this, the output file immediately balloons to fill all of the available space on the hard drive, and then dd stops. Any idea what's going wrong?
posted by Ø at 2:45 PM on July 14, 2007
Don't use dd; it's way too fiddly. The curl command is the easiest way to do this. In the terminal, all you need to do is type
curl -C - -O file:///Volumes/{Samba Share Name}/{path}/{to}/{file}
with the right file path. The easiest way is to type the "file://" part and then drop the Samba file onto the terminal so its path will be pasted in. The "-C -" tells it to continue where it left off in copying the file and the "-O" flag tells curl to give the file the same name locally as on the server. Those are both capital letters. Keep in mind that it will copy the file into your working directory in the terminal, which will be your home folder by default. Then, if the connection gets cut off, just remount the share and rerun the exact same curl command; curl will pick up where it left off.
posted by boaz at 2:59 PM on July 14, 2007
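Spelled out with a made-up share and file name, the whole resume cycle is just the same command twice:
cd ~/Downloads    # curl writes into the current working directory
curl -C - -O "file:///Volumes/Media/big-video.dmg"
# ...the connection drops partway; remount the share, then rerun the
# identical command and -C - resumes from the size of the partial file:
curl -C - -O "file:///Volumes/Media/big-video.dmg"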
I think I've figured out the problem: 1234 is a value in blocks, rather than bytes. Now it seems to be working correctly.
Thanks for the ideas.
posted by Ø at 2:59 PM on July 14, 2007
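An illustrative calculation with a made-up size shows why the disk filled up: say 524288000 bytes (500 MB) of the file had already arrived.
echo $((524288000 / 4096))   # => 128000, the correct skip/seek value at bs=4k
# Passing the raw byte count instead seeks 4096 times too far into the
# output file, and since HFS+ doesn't do sparse files, that seek fills
# the gap with real blocks -- hence the balloon.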
Curl turned out to be the best solution. No fiddling required, and it even gives transfer statistics -- perfect!
posted by Ø at 3:17 PM on July 14, 2007
Oh, yeah, blocks, duh. Always read manpages carefully ;)
Good catch with curl, forgot it supported file://
posted by Freaky at 6:10 PM on July 14, 2007
This thread is closed to new comments.