Monday, June 25, 2012

How to Add SAN or local volume in VIO Server and LPAR

The fact is, I'm still learning how to administer an IBM box.

The procedure I am recording below is largely the same whether the storage being added is a SAN LUN or a local disk. If in doubt, check using smitty.

The prerequisite is that all the disks or LUNs are already presented to the VIO server, before the VIO server is used to assign them as volumes to the LPARs.

If you have 2 VIO servers, which is usually the case for redundancy purposes, do this on both VIO servers. A good practice would be to mirror the disks at the LPAR level, as sketched below.
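For reference, a minimal sketch of mirroring at the LPAR level (assuming the backing devices presented by the two VIO servers show up in the LPAR as hdisk6 and hdisk7, and the VG is called datavg; these names are only illustrative):
# extendvg datavg hdisk7
# mirrorvg datavg hdisk7
Afterwards, lsvg -l datavg should show 2 copies (PPs = 2 x LPs) for each LV in that VG.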

1) Log in to the VIO server. Verify that the disks or LUNs are visible to the VIO server.

'Refresh' the device tree.
$ cfgdev

List the PVs. Those labelled "None" are not yet assigned to any use.
$ lspv 

You can also double-check using the "-free" option to list those that are not yet in use. This option is only available in the VIO restricted shell.
$ lspv -free

2) Create VG in VIO server

Once the disks or LUNs are visible, go on to create the VG. Notice that a PVID is generated for the disk or LUN to uniquely identify it in the ODM.
$ mkvg -vg dbvg_clients hdisk8
dbvg_clients
0516-1254 mkvg: Changing the PVID in the ODM.
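To double-check on the VIO server (an optional verification step), list the new VG:
$ lsvg dbvg_clients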

3) Next, create the LV in the VIO server

Here we create an LV named "db_vg" in the "dbvg_clients" VG, with 20 GB of space from hdisk8.
$ mklv -lv db_vg dbvg_clients 20000M hdisk8
db_vg
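Optionally, confirm the LV was created with the expected size:
$ lslv db_vg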

4) Now, map the LV to the LPAR as a virtual device.

Create a virtual SCSI adapter in the HMC that maps to the LPAR, run cfgdev again on the VIOS to pick it up, then map the LV to the LPAR through that adapter.
$ mkvdev -vdev db_vg -vadapter vhost5
vtscsi0 Available
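To confirm the new mapping (optional), list the adapter; the vtscsi device should show db_vg as its backing device:
$ lsmap -vadapter vhost5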
 



For easier management you may want to separate the different VGs assigned to an LPAR across different vhostX adapters, e.g. rootvg assigned to vhost4 and datavg assigned to vhost5, both belonging to the same LPAR (myserver). Also, if there are multiple SAN devices you want to connect to, the recommendation is to use a different vhost for each, so that if you ever need to remove access to one SAN device, its virtual resources can be removed easily without downtime.
vhost is the "Virtual SCSI Server Adapter" that maps to the LPAR. To know which LPAR is mapped to which vhost, check the HMC.
In the HMC, select the LPAR > Hardware Information > Virtual I/O adapters > SCSI.
Look for your host(s) under "Remote Partition" and note their vhost number. Another way is to check from the VIO server:
$ lsmap -all | grep vhost | grep 0005
vhost4          U8233.E8B.1003D8P-V1-C40                     0x00000005
vhost5          U8233.E8B.1003D8P-V1-C41                     0x00000005
vhost12         U8233.E8B.1003D8P-V1-C42                     0x00000005

5) Let's log in to the LPAR mapped to vhost5, aka myserver.

Refresh the device tree.
# cfgmgr

The disk should now appear. Here we see hdisk6:
root@myserver:> lspv
hdisk0          00f603d83df2e2f6                    rootvg          active
hdisk1          00f603d843c55fa7                    rootvg          active
hdisk2          00f603d852925bfd                    oravg           active
hdisk3          00f603d891cf7b8a                    oravg2          active
hdisk4          00f603d852925c71                    oravg           active
hdisk5          00f603d891cf7c09                    oravg2          active
hdisk6          none                                None

6) Create VG in LPAR

We will create a VG using hdisk6. Notice that a PVID is generated for the PV.
root@myserver:/> mkvg -y oravg3 hdisk6
0516-1254 mkvg: Changing the PVID in the ODM.
oravg3
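As a quick optional check, hdisk6 should now show its new PVID and VG membership:
# lspv hdisk6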

7) Next, create LV in LPAR

The command below creates an LV named "oratmplv" in the oravg3 VG with 750 LPs from hdisk6. As each LP here is 256 MB, the total LV size is about 192 GB (750 x 256 MB).
# /usr/sbin/mklv -y oratmplv -t jfs2 oravg3 750 hdisk6
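To verify the LP count and PP size of the new LV (optional):
# lslv oratmplv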

8) Create the journal log LV in the LPAR, if required.

The journal log LV is the log device that JFS2 uses to record file-system metadata transactions; the OS uses it to recover the file system after a crash or unclean shutdown.
This log LV, oradjloglv3, gets 3 LPs from hdisk6.
# /usr/sbin/mklv -y oradjloglv3 -t jfs2log oravg3 3 hdisk6
A general guide is about 2 MB of log per 1 GB of file system, or 1 log partition per 512 data partitions, as a baseline. For the ~192 GB file system above, that works out to roughly 384 MB, i.e. 2 x 256 MB LPs, so 3 LPs leaves some headroom.
In some cases, where file-system activity is too heavy or too frequent for the log device, the log can wrap and you will see errors like the ones below in errpt:
LABEL: J2_LOG_WAIT IDENTIFIER: CF71B5B3
LABEL: J2_LOG_WRAP IDENTIFIER: 061675CF
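You can check whether either of these has been logged using the error IDs above (an optional check):
# errpt -j CF71B5B3,061675CF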

If this happens, increase the size of the journal log LV:
# extendlv <log LV name> <number of additional LPs>

Check which file systems are using the log device, then unmount them:
# mount
# unmount <file system using the log>

Re-format the log device and mount the file system(s) back:
# logform /dev/<log LV name>
# mount <file system>

My recommendation is to quiesce the data LV (stop application I/O) before working on its journal log device.
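Putting the sequence together with the names used in this post (assuming the log LV is oradjloglv3, the data LV oratmplv is mounted on /opt/oracle11, and you want to add 2 more LPs to the log; adjust to your own layout):
# extendlv oradjloglv3 2
# unmount /opt/oracle11
# logform /dev/oradjloglv3
# mount -o log=/dev/oradjloglv3 /dev/oratmplv /opt/oracle11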

9) Format the log device

We will use a dedicated log device rather than an "inline" log inside the data LV. (This step is not required if you are not using a separate log device.)
root@myserver:/> logform  /dev/oradjloglv3
logform: destroy /dev/roradjloglv3 (y)?y

10) Create the file system with a mount point.

Create the mount point:
# mkdir /opt/oracle11

We create the file system that will be mounted on /opt/oracle11.

If you are using "smitty storage" to create the file system, do the following:
  • put the log device in "Logical Volume for Log"
  • leave the "Inline Log size" blank.
Alternatively, use crfs with "-a logname='oradjloglv3'" (a rough example follows the mkfs command below), or use mkfs directly:
# mkfs -o log=/dev/oradjloglv3 -V jfs2 /dev/oratmplv
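For reference, a rough crfs equivalent would look something like the line below; treat it as a sketch (the -A yes makes it mount automatically at restart):
# crfs -v jfs2 -d oratmplv -m /opt/oracle11 -a logname=oradjloglv3 -A yes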

11) Mount the file system; it is then ready for use.

Confirm file-system integrity:
# fsck -p /dev/oratmplv

Then finally set the ownership on the mount point and mount the file system:
# chown -R oracle:dba /opt/oracle11
# mount -o log=/dev/oradjloglv3 /dev/oratmplv /opt/oracle11
 


Remember to add an entry to /etc/filesystems if you want it mounted after every restart; a sample stanza is shown below.
If you used smitty to mount the filesystem, choose "Mount AUTOMATICALLY at system restart?" so that you don't need to meddle with the /etc/filesystems file.
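For reference, the stanza in /etc/filesystems would look roughly like this (a sketch based on the names used above, not taken from a live system):

/opt/oracle11:
        dev             = /dev/oratmplv
        vfs             = jfs2
        log             = /dev/oradjloglv3
        mount           = true
        check           = false
        options         = rw
        account         = false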
