Encrypt external hard drive on Ubuntu
June 1, 2018 12:31 PM

How can I encrypt a 200GB folder on an external hard drive?

I feel like this should be easy, but it's been surprisingly hard.

I run Ubuntu 14.04. I have a folder on my local hard drive containing many thousands of files, totaling around 200GB. I want to copy this folder to an external hard drive and encrypt the folder. What's the best way to do this?

I tried using VeraCrypt and a plain encrypted .zip file, but both methods choked on this volume of data.
posted by morninj to Computers & Internet (7 answers total) 1 user marked this as a favorite
 
Sounds like you should be able to use VeraCrypt to create a 200GB container and copy the files into it.

Wow, somehow missed the last line of your post...

It does seem like it should do what you need, though. The tutorial walks through creating a container and mounting it. Once mounted, you can just copy/move the files into it.
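
If the GUI is the part that's choking, VeraCrypt also has a text-mode interface that does the same thing (a sketch; the exact prompts vary by version, and the container path and mount point here are made up):

Create the container interactively (it prompts for path, size, password, and filesystem):
veracrypt -t -c

Mount it:
veracrypt -t /path/to/container /mnt/veracrypt_folder

Dismount when done:
veracrypt -d /path/to/container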
posted by hankscorpio83 at 12:56 PM on June 1, 2018


VeraCrypt is the answer. The initial creation of the container takes a goodly amount of time, especially on an older machine (likely what you have if you're running 14.04, yes?). The initial data transfer will take a while too. After that it should be as speedy as your connection to the external disk allows.

EncFS would probably be much speedier, but it has caveats (the files are encrypted and obscured, but you can still see the individual file sizes; don't lose or delete the key file or you won't be able to retrieve the files). Gnome Encfs Manager or Cryptomator are convenient GUIs if you go in that direction.
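
For reference, the command-line version is about as simple as it gets (a sketch; the directory names are made up, and encfs wants absolute paths):

Create and mount (the first run sets up the key file and asks for a password):
encfs /media/external/encrypted ~/plain_view

Unmount when done:
fusermount -u ~/plain_view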

Both of these may be a little confusing the first time you use them. After you read or watch some tutorials, try them out with a small batch of files to make sure you understand what is happening before going full-bore on 200 GB!
posted by quarterframer at 1:24 PM on June 1, 2018


Make an empty file for your encrypted data (all of these commands need root, so prefix them with sudo if necessary):
dd if=/dev/zero of=/path/to/file/on/external_drive bs=1M count=200000

Set it up as a loop device:
losetup /dev/loop0 /path/to/file/on/external_drive

Set up said loop device as an encrypted device:
cryptsetup -y luksFormat /dev/loop0

Mount encrypted device and add a filesystem:
cryptsetup luksOpen /dev/loop0 encrypted_thing
mkfs.ext4 /dev/mapper/encrypted_thing

Mount the filesystem:
mkdir -p /mnt/encrypted_folder
mount /dev/mapper/encrypted_thing /mnt/encrypted_folder

Copy all your files over.

When you’re done, unmount and luksClose it:
umount /dev/mapper/encrypted_thing
cryptsetup luksClose encrypted_thing
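
Re-opening it later is the same dance minus the one-time formatting steps (same names as above):
losetup /dev/loop0 /path/to/file/on/external_drive
cryptsetup luksOpen /dev/loop0 encrypted_thing
mount /dev/mapper/encrypted_thing /mnt/encrypted_folder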
posted by retypepassword at 1:24 PM on June 1, 2018 [2 favorites]


retypepassword has just what I would do (and do in some cases) unless I just went and made the whole external drive encrypted (which works just about the same way).
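
For completeness, the whole-drive version is the same recipe minus the big file and loop device (a sketch; /dev/sdX1 stands in for the external drive's partition, so triple-check it with lsblk first, because luksFormat destroys whatever is there):

cryptsetup -y luksFormat /dev/sdX1
cryptsetup luksOpen /dev/sdX1 encrypted_thing
mkfs.ext4 /dev/mapper/encrypted_thing
mount /dev/mapper/encrypted_thing /mnt/encrypted_folder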
posted by zengargoyle at 2:06 PM on June 1, 2018 [1 favorite]


Previous answers assume, as I did, that you want to access the files on the external HD the same way you do now on your local disk. If that's the case, seconding retypepassword and zengargoyle.

Encrypting the whole external disk is easier, but if that's not desirable, don't be stingy when creating the empty file where you'll put the encrypted filesystem: if you have 200 GB of files, make a 200 * 1.25 = 250 GB loop device so there's headroom for filesystem overhead and growth.
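
With retypepassword's dd command, that just means a bigger count (same made-up path as above):
dd if=/dev/zero of=/path/to/file/on/external_drive bs=1M count=250000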


OTOH, if you want an encrypted backup to be used only if your local copy is damaged or lost, the easiest and most portable option is to tar the folder, symmetrically encrypt it with GnuPG (perhaps compressing too, if the data justifies it), and move the encrypted file to the external HD.

Encrypt the whole folder:
tar -c FOLDER_TO_ARCHIVE | gpg -o ENCRYPTED_FILE.gpg -c -

Decrypt and extract:
gpg -d ENCRYPTED_FILE.gpg | tar -xf -

You'll be prompted for the password when executing both of the above commands.

If you already use GnuPG and have a public/private keypair, you may replace the -c with -e -r YOUR_KEY_ID in the encrypt command to use your keypair instead of a password.
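
For the record, the compressed variant mentioned above is one extra flag on each side (worthwhile only if your data actually compresses):

Encrypt with compression:
tar -cz FOLDER_TO_ARCHIVE | gpg -o ENCRYPTED_FILE.tar.gz.gpg -c -

Decrypt and extract:
gpg -d ENCRYPTED_FILE.tar.gz.gpg | tar -xzf -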

Seconding this, too:
try them out with a small batch of files to make sure you understand what is happening before going full-bore on 200 GB!
posted by Bangaioh at 3:02 PM on June 1, 2018 [1 favorite]


I was going to suggest gpg and tar but Bangaioh beat me to it. I will add a couple of suggestions:

1. If the external HD came pre-formatted, ensure that the filesystem it was formatted with can handle large files (FAT32, the usual factory default, tops out at 4 GB per file). I always just format mine to ext4 to avoid the headaches; you may want to do that as well.

2. I always immediately decrypt the tar file after it's been encrypted to confirm that I can. I also use 'cmp' to confirm that they're identical, although that's probably paranoia on my end.
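
One way to do that check without writing out a second copy is tar's --diff mode, which compares the archive against the files on disk (a sketch; run it from the directory containing the original folder):
gpg -d ENCRYPTED_FILE.gpg | tar -df -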

And of course, make sure you don't lose the password.
posted by suetanvil at 9:19 AM on June 2, 2018


I was ready to grar at gpg+tar as a really bad idea until...
if you want an encrypted backup to be used only if your local copy is damaged or lost
The *only* and *lost* are the important bits. If you ever want to do something else (*damaged*), you'll begin to hate yourself+tar. This would be ranty, but it goes back to 1989 and carrying half a dozen of those big reel-to-reel tapes you see in movies on each forearm like some sort of techno-Popeye. And to re-writing decades-old SunOS+tar+tapes-based incremental backup software to work with GNU/Linux/tar and petabyte robot-armed tape machines.

So in the end, big-file+loop+luks+mount leaves you with a big file that you can just as easily gzip and store away or upload somewhere, with all the same advantages as gpg+tar, but with the benefit that it's *much* easier to do *anything* more complex than just un-dumping the whole backup.

If you do go gpg+tar, the one other simple thing you can do to make life easier in the future is to do something like:
find ./plot -type f -exec sha512sum "{}" \; > 0sums
And then you have a file that is both a TOC of the archive and checksums.
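
Checking against that file later is just (run from the same directory the find was run from):
sha512sum -c 0sums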

Then you can both keep those 0sums files around for future reference and include them in the actual archive. That's sorta phase 1 of turning gpg+tar into something you won't hate in six months. Then you'll want to learn about the GNU tar features that support incremental backups. Then you'll need to write backup/restore scripts. Then, OMG NOOOOOOOOOO.

Go for a bigfile+loop, add rsync, add rsnapshot for backup... and you're still left with a file that you can gzip and save or upload somewhere and you get the bonus of incredible flexibility and backups that go back to the day you started.
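
The rsync step is a one-liner once the LUKS file is mounted (a sketch; the paths are made up, and the trailing slashes matter: this mirrors the contents of the first directory into the second):
rsync -a --delete /path/to/your/folder/ /mnt/encrypted_folder/folder/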

I'd offer up my tar-based backup/restore scripts but I don't think I have enough legal rights to do so.
posted by zengargoyle at 1:24 PM on June 2, 2018 [1 favorite]


This thread is closed to new comments.