EXADATA: How to configure ssh for current user on a list of nodes?

APPLIES TO:

Oracle Exadata Storage Server Software – Version 11.2.1.2.0 and later.

GOAL

Customer needs steps to configure passwordless SSH for the current user across a list of Exadata nodes.

SOLUTION

Run the following on a db node, and follow the prompts.
/opt/oracle.SupportTools/onecommand/setssh-Linux.sh -h /opt/oracle.SupportTools/onecommand/all_nodelist_group
This configures passwordless SSH for the current user on every node listed in all_nodelist_group.
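Once the script finishes, a quick loop over the same group file can confirm that every node really accepts a passwordless login. A minimal sketch (read_hosts and verify_ssh are illustrative helper names, not part of the onecommand toolkit):

```shell
#!/bin/sh
# read_hosts strips blank lines and comments from a group file.
read_hosts() {
    grep -v '^[[:space:]]*$' "$1" | grep -v '^[[:space:]]*#'
}

# verify_ssh tries a passwordless login to each host; BatchMode=yes makes
# ssh fail instead of prompting for a password.
verify_ssh() {
    for h in $(read_hosts "$1"); do
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" true 2>/dev/null; then
            echo "$h: OK"
        else
            echo "$h: passwordless ssh NOT working"
        fi
    done
}

# verify_ssh /opt/oracle.SupportTools/onecommand/all_nodelist_group
```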

 

If the script is missing, it can be extracted from a current download of the onecommand utility;

see Document 888828.1 – Exadata Database Machine and Exadata Storage Server Supported Versions.

 

If /opt/oracle.SupportTools/onecommand/all_nodelist_group does not exist, create a file containing one host entry per line for each node that needs SSH configured.

Example: to configure SSH to the InfiniBand switches, list them in a file (here IBSWITCH_GROUP) and pass that file to the command.

[root@exa01dbadm01 ~]# cd /opt/oracle.SupportTools/onecommand/
[root@exa01dbadm01 onecommand]# vi IBSWITCH_GROUP
exa01sw-iba01
exa01sw-ibb01

[root@exa01dbadm01 onecommand]#chmod 775  IBSWITCH_GROUP 

[root@exa01dbadm01 onecommand]# ./setssh-Linux.sh -s -p  PassWord -n N -h IBSWITCH_GROUP
[root@exa01dbadm01 onecommand]# cat IBSWITCH_GROUP
exa01sw-iba01
exa01sw-ibb01
[root@exa01dbadm01 onecommand]# ssh exa01sw-iba01
Last login: Sat Oct 14 15:39:56 2017 from exa01dbadm01.omsan.com.tr
You are now logged in to the root shell.
It is recommended to use ILOM shell instead of root shell.
All usage should be restricted to documented commands and documented
config files.
To view the list of documented commands, use "help" at the Linux prompt.
[root@exa01sw-iba01 ~]#

For a user other than root (for example, the oracle user):

EXADATA passwordless SSH login not working for oracle user

i. Login to the oracle account:
# su - oracle

ii. Create a dcli group file listing the nodes in the Oracle Cluster.

iii. Run the SSH setup script (this assumes the oracle password on all servers in the dbs_group list is set to "welcome1"):
$./setssh-Linux.sh -s -p welcome1 -n N -h dbs_group

Source: Oracle Support note (Doc ID 1923785.1)

 


How can I check the port status of my fibre channel HBA?

Environment

  • Red Hat Enterprise Linux 5
  • Red Hat Enterprise Linux 6

Issue

  • Need to check the port status of my fibre channel HBA

Resolution

  • The state of the port can be checked under /sys/class, either with systool (from the sysfsutils package):
$ systool -c fc_host -v
Class = "fc_host"

  Class Device = "host10"
  Class Device path = "/sys/class/fc_host/host10"
    fabric_name         = "0x200000e08b8068ae"
    issue_lip           = 
    node_name           = "0x200000e08b8068ae"
    port_id             = "0x000000"
    port_name           = "0x210000e08b8068ae"
    port_state          = "Linkdown"
    port_type           = "Unknown"
    speed               = "unknown"
    supported_classes   = "Class 3"
    supported_speeds    = "1 Gbit, 2 Gbit, 4 Gbit"
    symbolic_name       = "QLE2460 FW:v5.06.03 DVR:v8.03.07.15.05.09-k"
    system_hostname     = ""
    tgtid_bind_type     = "wwpn (World Wide Port Name)"
    uevent              = 

    Device = "host10"
    Device path = "/sys/devices/pci0000:00/0000:00:04.0/0000:08:00.0/host10"
      optrom_ctl          = 
      reset               = 
      uevent              = 
:

or, if the sysfsutils package is not installed, read the state directly from sysfs (the grep -v "zZzZ" pattern matches every line; -H prefixes each line with its file name):

[root@axx /]# grep -v "zZzZ" -H /sys/class/fc_host/host*/port_state
/sys/class/fc_host/host0/port_state:Linkdown
/sys/class/fc_host/host1/port_state:Linkdown
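The same check can be wrapped into a small loop that prints one line per HBA. A sketch (fmt_state is an illustrative helper, not a system command):

```shell
#!/bin/sh
# fmt_state prints "<hostN>: <state>" given a sysfs port_state path and
# the state read from it.
fmt_state() {
    printf '%s: %s\n' "$(basename "$(dirname "$1")")" "$2"
}

# Loop over every FC HBA present on the system:
for f in /sys/class/fc_host/host*/port_state; do
    if [ -r "$f" ]; then
        fmt_state "$f" "$(cat "$f")"
    fi
done
```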

 

Linux (CUPS) Spooler Troubleshooting

Step 1: Check error logs

Command:    tail -f /var/log/cups/error_log 

I [08/Mar/2017:16:58:03 +0300] [Job ???] Request file type is text/plain.
I [08/Mar/2017:16:58:03 +0300] [Job 7] Adding start banner page "none".
I [08/Mar/2017:16:58:03 +0300] [Job 7] Adding end banner page "none".
I [08/Mar/2017:16:58:03 +0300] [Job 7] File of type text/plain queued by "emdadm".
I [08/Mar/2017:16:58:03 +0300] [Job 7] Queued on "TEST_LABEL_PRINTER" by "emdadm".

Linux (CUPS) Spooler Commands

Step 2: To view the status of all print queues:

Command:    lpc status

Example:

serddad1:~ # lpc status
 TEST_LABEL_PRINTER:
 printer is on device 'socket' speed -1
 queuing is enabled
 printing is enabled
 1 entries
 daemon present

PROD_LABEL_PRINTER:
 printer is on device 'socket' speed -1
 queuing is enabled
 printing is enabled
 1 entries
 daemon present

Step 3: To check the status of a single print queue and view a list of pending jobs:

Command:  lpc status printer_name,  lpstat -P printer_name,  lpstat -pprinter_name

Example:

serddad1:~ # lpc status TEST_LABEL_PRINTER
 TEST_LABEL_PRINTER:
 printer is on device 'socket' speed -1
 queuing is enabled
 printing is enabled
 1 entries
 daemon present
 serddad1:~ # lpstat -P TEST_LABEL_PRINTER
 TEST_LABEL_PRINTER-1 root 17408 Tue Feb 28 14:38:09 2017
 serddad1:~ # lpstat -pTEST_LABEL_PRINTER
 printer TEST_LABEL_PRINTER is idle. enabled since Thu Mar 9 14:30:20 2017

Step 4: To remove a single print job:

Command:  cancel printer_name-id  (take the job id from the output of lpstat -P printer_name)

Example:

serddad1:~ # cancel TEST_LABEL_PRINTER-1

Step 5: To remove all print jobs in a queue:

Command: cancel -a  printer_name

Example:

serddad1:~ # cancel -a TEST_LABEL_PRINTER

Step 6: To enable a queue:

Command: cupsenable printer_name

Example:

serddad1:~ # cupsenable TEST_LABEL_PRINTER

Step 7: To disable a queue:

Command: cupsdisable printer_name

Example:

serddad1:~ # cupsdisable TEST_LABEL_PRINTER

Step 8: To enable all queues:

Command:

lpstat -p | grep disabled | awk '{print $2}' | xargs cupsenable

Example:

serddad1:~ #lpstat -p | grep disabled | awk '{print $2}' | xargs cupsenable

Step 9: To print a test job:

Command:

echo test_page | lpr -P printer_name

Example:

serddad1:~ #echo test_page | lpr -P  TEST_LABEL_PRINTER

Step 10: To restart/refresh the CUPS service:

Command:

service cups restart
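The checks above can be combined into a one-look summary per queue. A sketch (queue_summary is an illustrative helper that classifies the first line of `lpstat -p <printer>` output; the queue name is taken from this article's examples):

```shell
#!/bin/sh
# queue_summary classifies one line of `lpstat -p` output.
queue_summary() {
    case "$1" in
        *" is idle."*)     echo idle ;;
        *" disabled "*)    echo disabled ;;
        *" now printing"*) echo printing ;;
        *)                 echo unknown ;;
    esac
}

# Usage against a live queue:
# queue_summary "$(lpstat -p TEST_LABEL_PRINTER | head -1)"
# lpstat -P TEST_LABEL_PRINTER | wc -l    # number of pending jobs
```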

ORA-15032, ORA-15040, ORA-15042 Error on Oracle ASM

ERROR: Attempting to mount the disk group on ASM fails:

SQL> alter diskgroup RECOc1 mount;

alter diskgroup RECOc1 mount

*

ERROR at line 1:

ORA-15032: not all alterations performed

ORA-15040: diskgroup is incomplete

ORA-15042: ASM disk "2" is missing from group number "2"

Check the alert log:

Wed Feb 15 15:32:15 2017
NOTE: Disk RECOC1_0000 in mode 0x7f marked for de-assignment
NOTE: Disk RECOC1_0001 in mode 0x7f marked for de-assignment
ERROR: diskgroup RECOC1 was not mounted
ORA-15032: not all alterations performed
ORA-15040: diskgroup is incomplete
ORA-15042: ASM disk "2" is missing from group number "2"

 

SOLUTION:

All of the following steps must be run as the root user.

1. Scan for the RECOC* disks and map them to their block devices with this script:

/etc/init.d/oracleasm querydisk -d `/etc/init.d/oracleasm listdisks -d` | \
cut -f2,10,11 -d" " | \
perl -pe 's/"(.*)".*\[(.*), *(.*)\]/$1 $2 $3/g;' | \
while read v_asmdisk v_minor v_major
do
v_device=`ls -la /dev | grep " $v_minor, *$v_major " | awk '{print $10}'`
echo "ASM disk $v_asmdisk based on /dev/$v_device [$v_minor, $v_major]"
done

2. Delete the existing RECOC* disks:

/etc/init.d/oracleasm deletedisk RECOC1

/etc/init.d/oracleasm deletedisk RECOC2

/etc/init.d/oracleasm deletedisk RECOC3

3. Re-create the RECOC* disks:

oracleasm createdisk RECOC1 /dev/sdd1;

oracleasm createdisk RECOC2 /dev/sdh1;

oracleasm createdisk RECOC3 /dev/sdj1;
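On a RAC cluster the createdisk commands only need to run on one node; the other nodes then pick the new disks up with a rescan. A sketch (build_cmd is an illustrative helper; the /usr/sbin/oracleasm path is an assumption for newer oracleasm packagings):

```shell
#!/bin/sh
# build_cmd returns the oracleasm invocation for a given verb, preferring
# /usr/sbin/oracleasm when present (assumption for newer packagings).
build_cmd() {
    if [ -x /usr/sbin/oracleasm ]; then
        echo "/usr/sbin/oracleasm $1"
    else
        echo "/etc/init.d/oracleasm $1"
    fi
}

# On each remaining cluster node:
# $(build_cmd scandisks)
# $(build_cmd listdisks)    # should now show RECOC1, RECOC2, RECOC3
```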


If flashback is enabled, turn it off on the database first, and run the steps below as the grid user.

  1. In the GUI, open asmca, set Disk Group Name to RECOC1, and select the 3 disks to create the disk group.
  2. Check the alert log again:

SUCCESS: diskgroup RECOC1 was mounted
Wed Feb 15 15:46:30 2017
SUCCESS: CREATE DISKGROUP RECOC1 EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/RECOC1' SIZE 102398M ,
'/dev/oracleasm/disks/RECOC2' SIZE 1048570M ,
'/dev/oracleasm/disks/RECOC3' SIZE 511993M ATTRIBUTE 'compatible.asm'='12.1.0.0.0','au_size'='1M' /* ASMCA */
Wed Feb 15 15:46:30 2017

 

device ethX does not seem to be present, delaying initialization..

Problem: device ethX does not seem to be present, delaying initialization…

Solution: delete the /etc/udev/rules.d/70-persistent-net.rules file and restart the server.

Step 1: Restart the network service.

# service network restart

Shutting down loopback interface:
Bringing up loopback interface:
Bringing up interface eth0: Device eth0 does not seem to be present, delaying initialization.

Step 2:

#rm -rf /etc/udev/rules.d/70-persistent-net.rules

Step 3:

#reboot
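The error usually means the HWADDR recorded in ifcfg-eth0 no longer matches the NIC's real MAC (common after cloning or hardware replacement). A sketch to compare the two, assuming a RHEL-style ifcfg file (cfg_mac is an illustrative helper):

```shell
#!/bin/sh
# cfg_mac extracts the HWADDR value from an ifcfg file, lowercased and
# stripped of quotes, so it can be compared with the kernel's view.
cfg_mac() {
    sed -n 's/^HWADDR=//p' "$1" | tr -d '"' | tr 'A-F' 'a-f'
}

# Compare with the live MAC (assumes the interface is eth0):
# live=$(cat /sys/class/net/eth0/address)
# [ "$(cfg_mac /etc/sysconfig/network-scripts/ifcfg-eth0)" = "$live" ] \
#     || echo "MAC mismatch: fix HWADDR or remove the udev rules file"
```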

If your server is a VirtualBox guest:

Solution:

Step 1: Copy the MAC address from /etc/sysconfig/network-scripts/ifcfg-eth0 to the host: VirtualBox–>Settings–>Network–>Adapter 1–>Advanced–>MAC Address.

Step 2:

#service network reload

 

 

How to Create and Setup LUNs using LVM in “FC/iSCSI Target Server” on Suse/RHEL/CentOS/Fedora

Size Mounted on
150G  /
200G  /usr/sap
3.0T  /hana/data
1.0T  /hana/shared
1.0T  /hana/log
2.5T  /hana/backup
200G  /installation

Summary of commands;

lsb_release -a

mkdir  -p  /usr/sap

ll  /sys/class/scsi_host/host*

echo "- - -" > /sys/class/scsi_host/host0/scan

echo "- - -" > /sys/class/scsi_host/host1/scan

multipath -ll

pvcreate  /dev/mapper/360002ac0000000000000005f000198bf

vgcreate vgsdevhana  /dev/mapper/360002ac0000000000000005f000198bf

lvcreate -L +202G -n lv-usrsap vgsdevhana

ls -l /dev/vgsdevhana/*

mkfs.ext3 /dev/vgsdevhana/lv-usrsap

add to /etc/fstab:   /dev/vgsdevhana/lv-usrsap   /usr/sap   ext3   defaults 1 2

mount -a

df -Th
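The summary above can be scripted end to end. A sketch using the names and sizes from this example (gb_of is an illustrative helper for a pre-flight size check on integer G/T sizes; adjust the device and volume names to your system):

```shell
#!/bin/sh
# gb_of converts a size spec like 202G or 3T to GiB so the requested
# total can be sanity-checked against the volume group before lvcreate.
# (Integer sizes only.)
gb_of() {
    case "$1" in
        *T) echo $(( ${1%T} * 1024 )) ;;
        *G) echo "${1%G}" ;;
        *)  echo 0 ;;
    esac
}

# dev=/dev/mapper/360002ac0000000000000005f000198bf
# pvcreate "$dev" && vgcreate vgsdevhana "$dev"
# for spec in lv-usrsap:202G lv-data:3T lv-shared:1T lv-log:1T \
#             lv-backup:2.5T lv-setup:201G; do
#     lvcreate -L "+${spec#*:}" -n "${spec%:*}" vgsdevhana
# done
```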

Step-by-step details of the above commands are below.

*******************Check Operating System version**

serddad1:~ # lsb_release -a

LSB Version:    core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch

Distributor ID: SUSE LINUX

Description:    SUSE Linux Enterprise Server 11 (x86_64)

Release:        11

Codename:       n/a

serddad1:~ #

 

**************Create Directory********************************

sdevhana01:~ # mkdir  -p /hana/data

sdevhana01:~ # mkdir -p  /hana/shared

sdevhana01:~ # mkdir -p  /hana/log

sdevhana01:~ # mkdir -p   /hana/backup

sdevhana01:~ # mkdir -p   /installation

****Linux Scan for New Scsi Device to Detect New Lun Without Reboot********

serddad1:~ # ll  /sys/class/scsi_host/host*

lrwxrwxrwx 1 root root 0 Dec 20 12:53 /sys/class/scsi_host/host0 -> ../../devices/pci0000:10/0000:10:03.0/0000:13:00.0/host0/scsi_host/host0

lrwxrwxrwx 1 root root 0 Dec 20 12:53 /sys/class/scsi_host/host1 -> ../../devices/pci0000:10/0000:10:03.0/0000:13:00.1/host1/scsi_host/host1

serddad1:~ # echo "- - -" > /sys/class/scsi_host/host0/scan

serddad1:~ # echo "- - -" > /sys/class/scsi_host/host1/scan
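Rather than echoing into each host's scan file by hand, the rescan can loop over every SCSI host present. A sketch (scan_all is an illustrative helper; its argument is the sysfs directory, normally /sys/class/scsi_host):

```shell
#!/bin/sh
# scan_all writes the wildcard triple "- - -" (channel target lun) into
# every host's scan file under the given directory.
scan_all() {
    for f in "$1"/host*/scan; do
        if [ -w "$f" ]; then
            echo "- - -" > "$f"
        fi
    done
}

# scan_all /sys/class/scsi_host
```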

*************Check LUN*************************************

sdevhana01:~ # multipath -ll

 

360002ac0000000000000005f000198bf dm-5 3PARdata,VV

size=8.0T features='0' hwhandler='0' wp=rw

`-+- policy='service-time 0' prio=1 status=active

|- 1:0:0:1 sdc 8:32 active ready running

`- 1:0:1:1 sdd 8:48 active ready running

*************Create physical volume and check *********************

sdevhana01:~ # pvcreate /dev/mapper/360002ac0000000000000005f000198bf

Physical volume "/dev/mapper/360002ac0000000000000005f000198bf" successfully created

sdevhana01:~ # pvs

PV                                                  VG     Fmt  Attr PSize   PFree

/dev/mapper/360002ac0000000000000005e000198bf_part2 system lvm2 a--  269.83g 17.83g

/dev/mapper/360002ac0000000000000005f000198bf              lvm2 a--    8.00t  8.00t

**************Create Volume Group and check******************************

sdevha01 # vgcreate vgsdevhana  /dev/mapper/360002ac0000000000000005f000198bf

Volume group "vgsdevhana" successfully created

sdevhana01:~ #   vgs

VG         #PV #LV #SN Attr   VSize   VFree

system       1   2   0 wz--n- 269.83g 17.83g

vgsdevhana   1   0   0 wz--n-   8.00t  8.00t

sdevhana01:~ #

**************Create Logical Volume and check*********************

sdevhana01:~ # lvcreate -L +202G -n lv-usrsap vgsdevhana

Logical volume "lv-usrsap" created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +3T -n lv-data vgsdevhana

Logical volume "lv-data" created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-data   vgsdevhana -wi-a----   3.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +1T -n lv-shared vgsdevhana

Logical volume "lv-shared" created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-data   vgsdevhana -wi-a----   3.00t

lv-shared vgsdevhana -wi-a----   1.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +2,5T -n lv-backup vgsdevhana

Invalid argument for --size: +2,5T

Error during parsing of command line.

sdevhana01:~ # lvcreate -L +2.5T -n lv-backup vgsdevhana

Logical volume "lv-backup" created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-backup vgsdevhana -wi-a----   2.50t

lv-data   vgsdevhana -wi-a----   3.00t

lv-shared vgsdevhana -wi-a----   1.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +2.5T -n lv-setup vgsdevhana

Volume group "vgsdevhana" has insufficient free space (341500 extents): 655360 required.

sdevhana01:~ # lvcreate -L +201G -n lv-setup vgsdevhana

Logical volume "lv-setup" created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-backup vgsdevhana -wi-a----   2.50t

lv-data   vgsdevhana -wi-a----   3.00t

lv-setup  vgsdevhana -wi-a---- 201.00g

lv-shared vgsdevhana -wi-a----   1.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +1T -n lv-log vgsdevhana

Logical volume "lv-log" created

 

************************** format with ext3 *******************

(You can also format with ext4 or xfs.)

sdevhana01:~ # ls -l /dev/vgsdevhana/*

lrwxrwxrwx 1 root root 7 Dec 20 10:42 /dev/vgsdevhana/lv-backup -> ../dm-9

lrwxrwxrwx 1 root root 7 Dec 20 10:39 /dev/vgsdevhana/lv-data -> ../dm-7

lrwxrwxrwx 1 root root 8 Dec 20 10:44 /dev/vgsdevhana/lv-log -> ../dm-11

lrwxrwxrwx 1 root root 8 Dec 20 10:43 /dev/vgsdevhana/lv-setup -> ../dm-10

lrwxrwxrwx 1 root root 7 Dec 20 10:42 /dev/vgsdevhana/lv-shared -> ../dm-8

lrwxrwxrwx 1 root root 7 Dec 20 10:38 /dev/vgsdevhana/lv-usrsap -> ../dm-6

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-backup

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

167772160 inodes, 671088640 blocks

33554432 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

20480 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848, 512000000, 550731776, 644972544

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 29 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-data

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

201326592 inodes, 805306368 blocks

40265318 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

24576 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848, 512000000, 550731776, 644972544

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 28 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-log

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

67108864 inodes, 268435456 blocks

13421772 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

8192 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 31 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-setup

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

13172736 inodes, 52690944 blocks

2634547 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

1608 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 28 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-shared

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

67108864 inodes, 268435456 blocks

13421772 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

8192 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 37 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-usrsap

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

13238272 inodes, 52953088 blocks

2647654 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

1616 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 22 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ #

************************Add /etc/fstab and mount it************************

 

sdevhana01:~ # vim /etc/fstab

 

/dev/system/swap     swap                 swap       defaults              0 0

/dev/system/root     /                    ext3       acl,user_xattr        1 1

/dev/disk/by-id/scsi-360002ac0000000000000005e000198bf-part1 /boot/efi            vfat       umask=0002,utf8=true  0 0

proc                 /proc                proc       defaults              0 0

sysfs                /sys                 sysfs      noauto                0 0

debugfs              /sys/kernel/debug    debugfs    noauto                0 0

usbfs                /proc/bus/usb        usbfs      noauto                0 0

devpts               /dev/pts             devpts     mode=0620,gid=5       0 0

/dev/vgsdevhana/lv-usrsap     /usr/sap                    ext3    defaults 1 2

/dev/vgsdevhana/lv-data         /hana/data               ext3    defaults 1 2

/dev/vgsdevhana/lv-shared     /hana/shared          ext3    defaults 1 2

/dev/vgsdevhana/lv-log            /hana/log                  ext3    defaults 1 2

/dev/vgsdevhana/lv-backup     /hana/backup        ext3    defaults 1 2

/dev/vgsdevhana/lv-setup        /installation           ext3    defaults 1 2 

~

 

“/etc/fstab” 14L, 1037C written

sdevhana01:~ # mount -a

sdevhana01:~ # df -Th

Filesystem                                          Type   Size  Used Avail Use% Mounted on

/dev/mapper/system-root                             ext3   197G  3.3G  184G   2% /

udev                                                tmpfs  505G  164K  505G   1% /dev

tmpfs                                               tmpfs  505G   84K  505G   1% /dev/shm

/dev/mapper/360002ac0000000000000005e000198bf_part1 vfat   157M   14M  144M   9% /boot/efi

/dev/mapper/vgsdevhana-lv–usrsap                   ext3   199G  188M  189G   1% /usr/sap

/dev/mapper/vgsdevhana-lv–data                     ext3   3.0T  200M  2.9T   1% /hana/data

/dev/mapper/vgsdevhana-lv–shared                   ext3  1008G  200M  957G   1% /hana/shared

/dev/mapper/vgsdevhana-lv–log                      ext3  1008G  200M  957G   1% /hana/log

/dev/mapper/vgsdevhana-lv–backup                   ext3   2.5T  203M  2.4T   1% /hana/backup

/dev/mapper/vgsdevhana-lv–setup                    ext3   198G  188M  188G   1% /installation

sdevhana01:~ #

**************************

 

 

 

 

 

How to extend an LVM swap volume?

In this case, increase by 20G:

 

Summary of commands;

cat /etc/fstab | grep swap

swapoff /dev/system/swap

lvextend -L +20G /dev/system/swap

mkswap /dev/system/swap

swapon /dev/system/swap

free -g
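Before running lvextend it is worth confirming the volume group actually has the free space. A sketch (has_free is an illustrative helper; it parses a free-space figure as reported by `vgs --units g`):

```shell
#!/bin/sh
# has_free checks whether a vgs free-space figure like "35.83g" is at
# least the requested number of GiB.
has_free() {
    free=${1%g}        # strip the unit suffix
    free=${free%%.*}   # keep the integer part
    [ "$free" -ge "$2" ]
}

# vfree=$(vgs --noheadings -o vg_free --units g system | tr -d ' ')
# has_free "$vfree" 20 && lvextend -L +20G /dev/system/swap
```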

 

Details of the above commands are below.

sdevhana01:~ # free -g

total       used       free     shared    buffers     cached

Mem:          1009         10        999          0          0          0

-/+ buffers/cache:          9        999

Swap:           31          0         31

sdevhana01:~ # cat /etc/fstab | grep swap

/dev/system/swap     swap                 swap       defaults              0 0

sdevhana01:~ # swapoff /dev/system/swap

sdevhana01:~ # free -g

total       used       free     shared    buffers     cached

Mem:          1009         10        999          0          0          0

-/+ buffers/cache:          9        999

Swap:            0          0          0

sdevhana01:~ # vgs

VG     #PV #LV #SN Attr   VSize   VFree

  system   1   2   0 wz--n- 269.83g 35.83g

sdevhana01:~ # lvextend -L +20G /dev/system/swap

Extending logical volume swap to 52.00 GiB

Logical volume swap successfully resized

sdevhana01:~ # mkswap /dev/system/swap

mkswap: /dev/system/swap: warning: don't erase bootbits sectors

on whole disk. Use -f to force.

Setting up swapspace version 1, size = 54525948 KiB

no label, UUID=14a35aeb-6e1a-4848-a7ca-eb315e905d31

sdevhana01:~ # swapon /dev/system/swap

sdevhana01:~ # free -g

total       used       free     shared    buffers     cached

Mem:          1009         10        999          0          0          0

-/+ buffers/cache:          9        999

Swap:           51          0         51

sdevhana01:~ #