In an iSCSI context, what do LUNs mean?
March 23, 2011 12:49 AM

Question about iSCSI target, a single disk with a single partition, and LUNs.

I'm using iet on CentOS 5.5 as my iSCSI target, and whatever software CentOS packages as iscsi-initiator-utils on the initiator side of things.
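
For context, the initiators attach in the usual iscsiadm way (the portal IP and IQN below are made up):

    # discover targets on the server, then log in to one of them
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node -T iqn.2011-03.local.storage:disk1 -p 192.168.0.10 --login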

My question is: I have a disk (sdb) with a single partition (sdb1). I want to make sdb1 available on other machines via iSCSI. When adding the target to ietd.conf, how do I assign the LUNs?

1) Simple Way: have LUN 0 be /dev/sdb1 with no other LUNs (what I'm currently doing)

2) Overthinking Way: have LUN 0 be /dev/sdb and LUN 1 be /dev/sdb1
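
For concreteness, here's roughly what the two variants would look like in ietd.conf (the IQN is invented, and I'm assuming Type=blockio since these are raw block devices):

    # Option 1: just the partition
    Target iqn.2011-03.local.storage:disk1
        Lun 0 Path=/dev/sdb1,Type=blockio

    # Option 2: whole disk as LUN 0, partition as LUN 1
    Target iqn.2011-03.local.storage:disk1
        Lun 0 Path=/dev/sdb,Type=blockio
        Lun 1 Path=/dev/sdb1,Type=blockio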

Suppose in the future I add a second partition (sdb2), related to the first, that has to be exported too. Do I add another LUN (1 or 2, depending on the scheme), or do I configure another target entirely, dedicated to /dev/sdb2?

I guess I'm confused over what LUNs are. Are they a way to present the same disk in different configurations? Or are they a way to present multiple disks in the same target?

Bonus: I know choice (1) appears on the initiator side as just /dev/sdb and works. But if I do (2) how does it appear? As /dev/sdb and /dev/sdb1 or as /dev/sdb and /dev/sdc?

Extra Information: /dev/sdb1 is a PV in a clvm setup that has a clustered VG. The LV is formatted with GFS2. That's why multiple machines need to have simultaneous access to the same "physical" disk. The "physical" disk will be a VMware disk; I'd love to attach the VMware disk simultaneously to multiple VMs but my ESX guys tell me this prevents vMotion.
posted by sbutler to Computers & Internet (4 answers total)
 
Best answer: Separate LUNs are separate logical devices. They're a way of carving up a single SCSI address (back when such things were a limited resource) to get more devices on the bus, not a way of carving up a single device (partitioning).

Think of a SCSI-to-whatever bridge: it takes a single SCSI address and presents each of the devices on the far-side bus as a separate LUN.
posted by polyglot at 5:21 AM on March 23, 2011


Best answer: Jumping off of polyglot's comment, picture a SCSI bus. It's a wire you can attach multiple devices to, and each device has a SCSI ID. A wide bus gives you 16 IDs, two of which go to controllers here, so plugging in one drive cage with 14 drives puts you at max. The cage in this scenario is dumb; it's just a fancy SCSI cable.

Or, using LUNs, you can plug in 14 smart drive cages, each with a bunch of drives. Each *cage* is a SCSI device, and each LUN is a drive inside that cage.
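
On Linux you can see this addressing directly: lsscsi prints [host:channel:target:lun], so two LUNs behind the same target show up as two whole disks. Something like this (output invented for illustration):

    [2:0:0:0]  disk  IET  VIRTUAL-DISK  /dev/sdc
    [2:0:0:1]  disk  IET  VIRTUAL-DISK  /dev/sdd

Which also speaks to your bonus question: each LUN appears as its own disk (e.g. sdb and sdc), not as a disk plus its partition.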
posted by gjc at 6:48 AM on March 23, 2011


Best answer: 2) Overthinking Way: have LUN 0 be /dev/sdb and LUN 1 be /dev/sdb1

This won't work, because /dev/sdb contains /dev/sdb1. You could have a situation where something accesses sdb1 through the connection to sdb, and some other thing accesses sdb1 through the sdb1 connection.

(Or, more likely, it will "work" in that it will connect up, but all hell will break loose when the sdb1 filesystem starts getting accessed by two different processes. If GFS2 can handle this, then you shouldn't have this trouble.)

Remember, iSCSI is just a virtual SCSI cable. I don't know the protocol, but it might be impossible to use iSCSI to share one block device with multiple machines, since I don't think the SCSI protocol knows how to do this.

But the way to handle this, to my way of thinking, is to keep everything at the same "level". Either share the whole physical disk as one target, and let the client(s) mount one partition or another, or share the partitions as separate targets.

(If the OS will allow this; I know Windows iSCSI doesn't like seeing just partitions as targets, it likes to think it is talking to a physical disk with a partition table.)
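
To sketch the separate-targets version in ietd.conf (IQNs invented):

    # everything at the partition level, one target per partition
    Target iqn.2011-03.local.storage:part1
        Lun 0 Path=/dev/sdb1,Type=blockio

    Target iqn.2011-03.local.storage:part2
        Lun 0 Path=/dev/sdb2,Type=blockio

The whole-disk version would instead be a single target with Lun 0 Path=/dev/sdb.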

If "/dev/sdb" is really a disk image, it seems like a PITA to partition it. Just create different disk images.
posted by gjc at 6:53 AM on March 23, 2011


Response by poster: I don't know the protocol, but it might be impossible to use iSCSI to share one block device to multiple machines, since I don't think the SCSI protocol knows how to do this.

Seems to work just fine: the iSCSI target lets multiple initiators connect, and I am certainly running with multiple machines accessing the same LV simultaneously. GFS2 takes care of making sure the different nodes don't step on each other's toes.

But the way to handle this, to my way of thinking, is to keep everything at the same "level". Either share the whole physical disk as one target, and let the client(s) mount one partition or another, or share the partitions as separate targets.

My first attempt was to use /dev/sdb as the target, and indeed, the initiators saw a /dev/sdb and a /dev/sdb1. But every operation with LVM generated an "Input/Output error". For whatever reason, the iscsi-initiator machines couldn't write properly to /dev/sdb1 when it was shared as /dev/sdb.

If "/dev/sdb" is really a disk image, it seems like a PITA to partition it. Just create different disk images.

You know, originally this is what I wanted to do. But the interface in system-config-lvm wouldn't let me initialize the disk as a whole. I just ran pvcreate manually and it worked. Thanks!
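
In case it helps anyone later, the manual steps were roughly this (VG name invented; -c y marks the VG as clustered for clvm):

    pvcreate /dev/sdb
    vgcreate -c y cluster_vg /dev/sdb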
posted by sbutler at 1:21 PM on March 23, 2011

