Monday, June 25, 2012

Setting up Fibre Channel storage on SLES servers

So I recently had to do a storage setup at work, based on the components described in the setup below.  There wasn’t really a single end-to-end document online that described the steps, so I figured I might as well write one.  This is admittedly a rudimentary tour of the storage possibilities on Linux, so please regard it as such.

Prerequisites


DISCLAIMER: Obviously, I am not to be held responsible for any damage that results from following these instructions.  Your system is your own, and it is assumed that you’re smart enough not to permanently damage it.  By proceeding, you agree not to hold the blog author (me) responsible for any damage that results from reading further.

I’m also going to assume that the physical hookups are complete.  That is, the server and storage are properly cabled and powered, the OS is installed, and the necessary device drivers (network adapters, I/O adapters, etc.) are installed and configured (e.g. the systems are on the same internal network and can ping each other).
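
As a quick sanity sketch of those prerequisites (the management IP below is a placeholder, and the driver names assume QLogic or Emulex HBAs, so substitute your own):

 # can we reach the storage subsystem's management interface?
 ping -c 3 <storage-mgmt-ip>
 # is the FC adapter driver loaded? (qla2xxx = QLogic, lpfc = Emulex)
 lsmod | egrep 'qla2xxx|lpfc'
 # does the OS see the FC hosts at all?
 ls /sys/class/fc_host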

So that we have a concrete example to work with, let’s assume that this is our setup:

[Setup diagram: two SLES servers, each connected through a pair of IBM SAN40B FC switches to an IBM DS3524 storage subsystem.  Assumes the rest of the configuration is complete, e.g. network & management connections are established.]

Setup Steps


a. SAN Switch Layer – Storage Zoning


There’s a blog post that did an awesome job of describing the steps to zone a Brocade switch, which applies equally to IBM SAN40B FC switches (they’re rebranded Brocade hardware).  So there’s no point in me rehashing what’s mentioned there.  What I’m going to write about, however, are the more concrete steps I used to set up my zones.

One point worth noting: if you don’t have access to the physical I/O ports on the adapters or storage subsystem, the WWIDs listed in the switchshow output will only get you so far.  That is, although switchshow identifies which FC switch port is connected to which WWID, it won’t tell you the mapping between the WWID and a particular FC adapter or storage subsystem port.  See the Appendix for how to determine which WWID maps to which FC adapter/subsystem port.

Once you’ve determined those WWIDs, you can proceed with zoning according to Matt’s blog.  For my basic zone, I did the following (a sketch of the commands follows the list):
  1. Removed existing zone artifacts that were no longer in use: zonedelete, alidelete
  2. Created aliases for the server and storage subsystem FC ports: alicreate
  3. Created zones: zonecreate
  4. Added the zones to the configuration: cfgadd
  5. Enabled the new configuration (i.e. applied our changes to the config): cfgenable
  6. Repeated for the 2nd SAN switch
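
For reference, here’s a rough sketch of what those steps look like on the Fabric OS command line.  Every alias, zone, and config name here is made up, and the storage-side WWID is a placeholder, so substitute your own values:

 sanswitch01:admin> zonedelete "old_zone"
 sanswitch01:admin> alidelete "old_alias"
 sanswitch01:admin> alicreate "parlermo02_p0", "21:00:00:24:ff:2b:73:e8"
 sanswitch01:admin> alicreate "ds3524_ctlA_p0", "20:00:00:00:00:00:00:01"
 sanswitch01:admin> zonecreate "parlermo02_ds3524", "parlermo02_p0; ds3524_ctlA_p0"
 sanswitch01:admin> cfgadd "main_cfg", "parlermo02_ds3524"
 sanswitch01:admin> cfgenable "main_cfg"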

b. Storage Subsystem Layer – LUN setup/config


So now that your storage zoning is done, you can have the storage configured for your servers, i.e. create the array & logical drive.  I’m somewhat cheating here in that I’m going to use the “IBM System Storage DS Storage Manager” client to set this up for the IBM storage hardware.  No command line here.

After getting your SM client tool to discover the storage subsystem over the management network, you can assign host(s) to said storage.  As the following screenshot implies, from the “Mappings” tab you can either assign a group of hosts (servers) or just a single one.

After getting into the “Mappings” tab, right-click the Storage Subsystem entry to find the option to define your hosts.  Obviously, you’ll want to add the hosts here that were in your storage zone.


Once the host(s) are defined, you’ll want to create the array/logical drive to feed to the host(s) you’ve just defined.  This is done from the “Logical” tab.

After getting into the “Logical” tab, right-click the unconfigured capacity to get the option to create the array or logical drive.

c. OS Layer – Multipathing & Filesystem creation


Now that the switch zones are configured and the storage subsystems are mapped, it’s time to configure the storage at the server layer.  I’ll be cheating here as well, as I won’t go into any detail on the FC adapter driver installation or the multipath setup, since it was already done for me :)  However, I’d say that the SLES Storage Administration Guide is a decent place to start, specifically the multipath section.  On RHEL, you have the DM Multipath Configuration & Administration Guide.
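
As a reference point, here’s a minimal /etc/multipath.conf sketch for this kind of setup.  The IBM,1746 / rdac values come from the multipath output further down; option names vary a bit between multipath-tools versions, so treat this as a starting point rather than a drop-in config:

 defaults {
     user_friendly_names yes      # mpathX names instead of raw WWIDs
 }
 devices {
     device {
         vendor                "IBM"
         product               "1746"       # DS3500-series machine type
         hardware_handler      "1 rdac"
         path_grouping_policy  group_by_prio
         prio                  rdac
         path_checker          rdac
         failback              immediate
         no_path_retry         30
     }
 }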

Once that’s set up, it’s as easy as going into SLES’s YaST and rediscovering the storage devices (open up the Partitioner).  Easier (and more reliable) yet, a reboot of the system should suffice to refresh the available storage that’s been created.
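
If a reboot isn’t convenient, a manual SCSI rescan usually picks up the new LUNs too.  A sketch (rescan-scsi-bus.sh comes from SLES’s sg3_utils package; the sysfs echo is the lower-level equivalent):

 parlermo02:~ # rescan-scsi-bus.sh
 # or poke each SCSI host directly:
 parlermo02:~ # for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done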

Once the devices are available, you should be able to see them through the multipath -l command:

 parlermo02:/proc # multipath -l
 mpathi (360080e50001ba47a000009414fb26478) dm-3 IBM,1746      FAStT
 size=1.1T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
 |-+- policy='round-robin 0' prio=-1 status=active
 | |- 4:0:0:3 sdj 8:144  active undef running
 | `- 6:0:0:3 sdr 65:16  active undef running
 `-+- policy='round-robin 0' prio=-1 status=enabled
   |- 3:0:0:3 sdf 8:80   active undef running
   `- 5:0:0:3 sdn 8:208  active undef running
 mpathh (360080e50001ba47a0000093e4fb2641c) dm-1 IBM,1746      FAStT
 size=1.1T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
 |-+- policy='round-robin 0' prio=-1 status=active
 | |- 4:0:0:1 sdh 8:112  active undef running
 | `- 6:0:0:1 sdp 8:240  active undef running
 `-+- policy='round-robin 0' prio=-1 status=enabled
   |- 3:0:0:1 sdd 8:48   active undef running
   `- 5:0:0:1 sdl 8:176  active undef running

You should be able to use either the mpathh or dm-1 identifier in your filesystem creation tools.  Between the two, prefer the /dev/mapper/mpathh name: the dm-N numbers are assigned in discovery order and aren’t guaranteed to stay the same across reboots, whereas the /dev/mapper names persist.

After creating the filesystem (which I recommend using the Partitioner for), a few handy commands that I found useful (based on the way I needed my ext3 filesystem created) were: lvdisplay, vgdisplay, pvdisplay.
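
For reference, here’s roughly what the equivalent command-line flow looks like.  This is only a sketch: the volume group and logical volume names (datavg, datalv) and the mount point are hypothetical, and the mpathh device comes from the multipath output above:

 parlermo02:~ # pvcreate /dev/mapper/mpathh            # prep the multipath device for LVM
 parlermo02:~ # vgcreate datavg /dev/mapper/mpathh
 parlermo02:~ # lvcreate -n datalv -l 100%FREE datavg  # one LV over the whole VG
 parlermo02:~ # mkfs.ext3 /dev/datavg/datalv
 parlermo02:~ # mkdir -p /mnt/data
 parlermo02:~ # mount /dev/datavg/datalv /mnt/data

pvdisplay, vgdisplay, and lvdisplay are handy for confirming each step along the way.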

And there you go: from raw disk hardware to accessible filesystem.  You can then go and do whatever tests/verifications you would do to check that the filesystem is working as expected.  I tend to look at /etc/fstab, check df -m and mount, and run FIO (while watching iostat or nmon) to make sure my filesystems are working as expected.
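
As a sketch of that verification pass (the mount point and FIO parameters are just examples; run iostat -xm 2 in another terminal to watch the I/O land on the underlying sd* paths):

 parlermo02:~ # df -m /mnt/data                   # capacity look right?
 parlermo02:~ # mount | grep datalv               # mounted with the expected options?
 parlermo02:~ # fio --name=seqwrite --filename=/mnt/data/fio.test \
                    --rw=write --bs=1M --size=2G --direct=1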

Appendix


WWID (World Wide Identifier) on I/O adapters


If you go into /sys/class/fc_host, the WWIDs can be found in the host*/port_name files.  As a bonus, you can find a bit more information about the ports in the other port_* files.

 parlermo04:/sys/class/fc_host # awk 'FNR==1{print "\n" FILENAME}1' */port_name (<- THIS COMMAND IS CASE SENSITIVE)  
   
 host3/port_name  
 0x21000024ff2b73e8  
   
 host4/port_name  
 0x21000024ff2b73e9  
   
 host5/port_name  
 0x21000024ff2adb80  
   
 host6/port_name  
 0x21000024ff2adb81  
   
 parlermo04:/sys/class/fc_host # ls -l  
 total 0  
 lrwxrwxrwx 1 root root 0 May 30 16:29 host3 -> ../../devices/pci0000:00/0000:00:03.0/0000:09:00.0/host3/fc_host/host3  
 lrwxrwxrwx 1 root root 0 May 30 16:29 host4 -> ../../devices/pci0000:00/0000:00:03.0/0000:09:00.1/host4/fc_host/host4  
 lrwxrwxrwx 1 root root 0 May 30 16:29 host5 -> ../../devices/pci0000:00/0000:00:09.0/0000:18:00.0/host5/fc_host/host5  
 lrwxrwxrwx 1 root root 0 May 30 16:29 host6 -> ../../devices/pci0000:00/0000:00:09.0/0000:18:00.1/host6/fc_host/host6  
   
 parlermo04:~ # dmidecode -t 9 -u  
 # dmidecode 2.9  
 SMBIOS 2.7 present.  
   
 Handle 0x0044, DMI type 9, 17 bytes  
     Header and Data:  
         09 11 44 00 01 AB 0B 03 04 01 00 04 00 00 00 09  
         00  
     Strings:  
         4E 6F 64 65 20 31 20 50 43 49 2D 45 78 70 72 65  
         73 73 20 53 6C 6F 74 20 31 00  
         "Node 1 PCI-Express Slot 1"  
   
 Handle 0x0048, DMI type 9, 17 bytes  
     Header and Data:  
         09 11 48 00 01 AB 0B 03 03 05 00 04 00 00 00 18  
         00  
     Strings:  
         4E 6F 64 65 20 31 20 50 43 49 2D 45 78 70 72 65  
         73 73 20 53 6C 6F 74 20 35 00  
         "Node 1 PCI-Express Slot 5"  

Again, with the assumption that you don’t have physical access to the machine, the dmidecode command provides you that mapping.  lspci also gives you a human-readable description of the devices, but dmidecode is what you’ll want to run to get the definitive slot mapping.  I’ve snipped out the irrelevant slot outputs so that it’s easier to focus on where to look: the bus-number byte near the end of each dmidecode record (the 09 and 18 above) matches the PCI bus in the device paths from the ls output (0000:09:00.x and 0000:18:00.x).

So finally, in this example, we’re able to deduce that WWIDs 0x21000024ff2b73e8 and 0x21000024ff2b73e9 belong to ports 0 & 1, respectively, of the adapter in PCI slot 1, and that 0x21000024ff2adb80 and 0x21000024ff2adb81 belong to ports 0 & 1 of the adapter in PCI slot 5.
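
If you’d rather not eyeball symlinks, a small loop over sysfs (a sketch that relies on the directory layout shown above) prints each FC host’s PCI address next to its WWID; the output below follows from the listings earlier in this appendix:

 parlermo04:~ # for h in /sys/class/fc_host/host*; do
 >   echo "$(basename $h): pci=$(basename $(readlink -f $h/../../..)) wwid=$(cat $h/port_name)"
 > done
 host3: pci=0000:09:00.0 wwid=0x21000024ff2b73e8
 host4: pci=0000:09:00.1 wwid=0x21000024ff2b73e9
 host5: pci=0000:18:00.0 wwid=0x21000024ff2adb80
 host6: pci=0000:18:00.1 wwid=0x21000024ff2adb81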

WWID on DS3524 storage subsystems


So I’m luckily spoiled here: since the storage subsystem is an IBM product, operations related to it can be done through its Storage Manager Client software.  In the case of the WWIDs for the ports on the storage controllers, you have to view the subsystem profile.  To do that, go into the subsystem’s window, then into the “Logical” tab, select your subsystem, and click the link to view the subsystem profile.
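
If you’d rather script it, the Storage Manager install also ships a command-line client, SMcli.  Assuming the subsystem is registered in the client under a name like the hypothetical DS3524_01 below, something along these lines dumps the same profile:

 SMcli -n DS3524_01 -c "show storageSubsystem profile;"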


WWID on FC switch ports


So far, I haven’t found the need to determine the WWID of an FC switch port itself.  What I’ve used up to this point is switchshow, to find out which adapter or subsystem port an FC switch port is connected to.  But as you can tell from the following output, it’s difficult to know which port is cabled to which FC adapter/controller port just by looking.

 Index Port Address Media Speed State   Proto  
 ==============================================  
 0  0  020f00  id  N8  Online   FC F-Port 21:00:00:24:ff:22:08:38  
 1  1  020d00  id  N8  Online   FC F-Port 21:00:00:24:ff:22:08:54  
 2  2  020b00  id  N8  Online   FC F-Port 21:00:00:24:ff:22:09:ca  
 3  3  020900  id  N8  Online   FC F-Port 21:00:00:24:ff:22:07:e0  
 4  4  020e00  id  N8  No_Light  FC  
 5  5  020c00  id  N8  No_Light  FC  
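
What does help is matching the WWIDs you collected earlier against the F-Port entries in this output.  The sysfs port_name values are plain hex while switchshow prints them colon-separated; a quick sed (a sketch) converts one format to the other so you can grep for it:

 parlermo04:~ # echo 0x21000024ff2b73e8 | sed 's/^0x//; s/../&:/g; s/:$//'
 21:00:00:24:ff:2b:73:e8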
