-bash: /bin/rm: Argument list too long while removing files

Problem:

I have a directory in Linux that contains several hundred thousand files and is about 8.2 GB in size. I attempted to clear out the directory using 'rm -rf *aud' and got the following error:

Oracle audit files:

[oracle@drexa01dbadm01 rdbms]$ du -sh audit
8.2G audit
[oracle@drexa01dbadm01 rdbms]$ cd audit/
[oracle@drexa01dbadm01 audit]$ pwd
/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/audit
[oracle@drexa01dbadm01 audit]$ rm -rf *aud
-bash: /bin/rm: Argument list too long
[oracle@drexa01dbadm01 audit]$
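
The shell expands *aud to every matching filename before /bin/rm is even started, and with several hundred thousand files that expansion exceeds the kernel's argument-length limit. As an aside, you can see the limit on your system with:

$ getconf ARG_MAX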

Solution:

[root@drexa01dbadm01 audit]# find . -name '*aud' | xargs rm -v
removed `./opum_ora_5570_20170419113001272611143795.aud'
removed `./opum_ora_19087_20170825213720316402143795.aud'
removed `./opum_ora_24477_20170215033002283227143795.aud'
removed `./opum_ora_7691_20160815080001986930143795.aud'
removed `./opum_ora_16653_20160907043002231684143795.aud'
removed `./opum_ora_13406_20170127234157760483143795.aud'
removed `./opum_ora_20966_20160830140953226189143795.aud'
removed `./opum_ora_32328_20160302052636008652143795.aud'
removed `./opum_ora_8918_20170924193849158242143795.aud'
removed `./opum_ora_3496_20160915031853572953143795.aud'
removed `./opum_ora_6372_20161020181850094006143795.aud'
removed `./opum_ora_14855_20161014233431338219143795.aud'
removed `./opum_ora_3543_20160320162656730084143795.aud'
removed `./opum_ora_507_20171004204104885761143795.aud'
removed `./opum_ora_28535_20160510060437060235143795.aud'
removed `./opum_ora_12919_20170501052812370394143795.aud'
removed `./opum_ora_19556_20161209185701631815143795.aud'
removed `./opum_ora_4558_20170821004123546569143795.aud'
removed `./opum_ora_12780_20170113235941423540143795.aud'
removed `./opum_ora_7448_20160602072824080487143795.aud'
removed `./opum_ora_25740_20160704060001516869143795.aud'
removed `./opum_ora_18222_20170211181055180098143795.aud'
removed `./opum_ora_14988_20160619075753542821143795.aud'
removed `./opum_ora_18060_20170510231755910407143795.aud'
removed `./opum_ora_4974_20160904032946872225143795.aud'
removed `./opum_ora_22492_20160810164246080476143795.aud'
removed `./opum_ora_30220_20160730184335973711143795.aud'
removed `./opum_ora_9281_20160601050646550905143795.aud'
removed `./opum_ora_23802_20161007002846100802143795.aud'
removed `./opum_ora_31124_20170722151749240758143795.aud'
removed `./opum_ora_19994_20160603044311483365143795.aud'
removed `./opum_ora_12052_20160228214028001728143795.aud'
removed `./opum_ora_15291_20170914184724665451143795.aud'
removed `./opum_ora_22103_20160603225229269785143795.aud'
removed `./opum_ora_16853_20170731031019341399143795.aud'
removed `./opum_ora_17237_20170418113002372693143795.aud'
removed `./opum_ora_6563_20161224000002747653143795.aud'
removed `./opum_ora_25523_20170623172237509923143795.aud'
removed `./opum_ora_31895_20160806180001664608143795.aud'
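
If the file names could ever contain spaces or newlines, or if you prefer not to spawn rm at all, the same cleanup can be done a little more safely with one of these variants (alternative sketches, not what was run above):

find . -name '*aud' -print0 | xargs -0 rm -v
find . -name '*aud' -delete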

 

 

 


Permissions on the password database may be too restrictive. su: incorrect password

Environment

  • SLES for SAP Applications 11.4 (x86_64)
  • SUSE Linux Enterprise Server 11 (x86_64)

Issue

  • Permissions on the password database may be too restrictive.
    su: incorrect password

Resolution

  • Re-create the oragrid user.

agrcpgd1:oraegp 54> su - oragrid
Password:
Permissions on the password database may be too restrictive.
su: incorrect password

  • Check the user ID and group ID.

agrcpgd1:~ # cat /etc/passwd| grep oragrid
oragrid:x:507:503:Grid Admin:/home/oragrid:/bin/bash
agrcpgd1:~ # id oragrid
uid=507(oragrid) gid=503(dba) groups=503(dba)

  • Remove the oragrid user, without the -r option (the -r option would also remove the user's home directory and mail spool).

agrcpgd1:~ # userdel oragrid
no crontab for oragrid

  • Re-add the user with the same UID and GID.

agrcpgd1:~ # useradd -u 507 -g 503 -c "Grid Admin" -d /home/oragrid oragrid
agrcpgd1:~ # passwd oragrid
Changing password for oragrid.
New Password:
Reenter New Password:
Password changed.

agrcpgd1:oraegp 51> su - oragrid
Password:
oragrid@agrcpgd1:~>
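
The "Permissions on the password database may be too restrictive" message generally means su could not read the user's entry in the shadow file. Before recreating the account, it can be worth a quick look at the ownership and permissions of the password database files themselves (an optional check, not part of the fix above; on SLES the shadow file is normally mode 640, owned by root:shadow):

agrcpgd1:~ # ls -l /etc/passwd /etc/shadow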

How can I check the port status of my fibre channel HBA?

Environment

  • Red Hat Enterprise Linux 5
  • Red Hat Enterprise Linux 6

Issue

  • Need to check the port status of my fibre channel HBA

Resolution

  • The state of the port can be checked under the /sys/class directory, either with systool (from the sysfsutils package):
$ systool -c fc_host -v
Class = "fc_host"

  Class Device = "host10"
  Class Device path = "/sys/class/fc_host/host10"
    fabric_name         = "0x200000e08b8068ae"
    issue_lip           = 
    node_name           = "0x200000e08b8068ae"
    port_id             = "0x000000"
    port_name           = "0x210000e08b8068ae"
    port_state          = "Linkdown"
    port_type           = "Unknown"
    speed               = "unknown"
    supported_classes   = "Class 3"
    supported_speeds    = "1 Gbit, 2 Gbit, 4 Gbit"
    symbolic_name       = "QLE2460 FW:v5.06.03 DVR:v8.03.07.15.05.09-k"
    system_hostname     = ""
    tgtid_bind_type     = "wwpn (World Wide Port Name)"
    uevent              = 

    Device = "host10"
    Device path = "/sys/devices/pci0000:00/0000:00:04.0/0000:08:00.0/host10"
      optrom_ctl          = 
      reset               = 
      uevent              = 
:

or, if the sysfsutils package is not installed, directly from sysfs:

[root@axx /]# grep -v "zZzZ" -H /sys/class/fc_host/host*/port_state
/sys/class/fc_host/host0/port_state:Linkdown
/sys/class/fc_host/host1/port_state:Linkdown
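
For a quick one-line summary per HBA port without installing anything extra, a small loop over the same sysfs attributes also works (a sketch; the attribute names are the ones shown in the systool output above):

for h in /sys/class/fc_host/host*; do
  echo "$(basename $h): $(cat $h/port_state), speed $(cat $h/speed), wwpn $(cat $h/port_name)"
done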

 

device ethX does not seem to be present, delaying initialization..

Problem: device ethX does not seem to be present, delaying initialization...

Solution: delete the /etc/udev/rules.d/70-persistent-net.rules file and restart the server.

Step 1: Restart the network service.

# service network restart

Shutting down loopback interface:
Bringing up loopback interface:
Bringing up interface eth0: Device eth0 does not seem to be present, delaying initialization.

Step 2:

# rm -rf /etc/udev/rules.d/70-persistent-net.rules

Step 3:

# reboot

If your server is a VirtualBox VM:

Solution:

Step 1: Copy the MAC address from the guest file /etc/sysconfig/network-scripts/ifcfg-eth0 into the VM settings on the host: VirtualBox -> Settings -> Network -> Adapter 1 -> Advanced -> MAC Address.

Step 2:

# service network reload
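
Whichever case applies, it can help to first confirm that the MAC address recorded in the interface configuration actually matches the NIC the kernel sees; if they differ, udev will keep renaming the device. A quick comparison (optional; assumes the usual RHEL-style HWADDR field is present):

# grep -i hwaddr /etc/sysconfig/network-scripts/ifcfg-eth0
# ip link show | grep -i ether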

 

 

How to Generate a CSR with OpenSSL and Install the SSL Certificate on HAProxy on Linux

How to generate a CSR and private key with OpenSSL

 

Make a directory for the CSR and private key.

root@loadbalancer:mkdir -p /etc/ssl/certs/pem/CSRandPrivateKey

root@loadbalancer:cd /etc/ssl/certs/pem/CSRandPrivateKey

root@loadbalancer:/etc/ssl/certs/pem/CSRandPrivateKey# openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privatekey.key

Generating a 2048 bit RSA private key

.......................+++

..................................................+++

writing new private key to 'privatekey.key'

-----

You are about to be asked to enter information that will be incorporated

into your certificate request.

What you are about to enter is what is called a Distinguished Name or a DN.

There are quite a few fields but you can leave some blank

For some fields there will be a default value,

If you enter '.', the field will be left blank.

-----

Country Name (2 letter code) [AU]:TR

State or Province Name (full name) [Some-State]:Istanbul

Locality Name (eg, city) []:Maslak

Organization Name (eg, company) [Internet Widgits Pty Ltd]:Turizm Kampanyları Ltd.

Organizational Unit Name (eg, section) []:

Common Name (e.g. server FQDN or YOUR name) []:*.turizmkampanylari.com

Email Address []:

 

Please enter the following 'extra' attributes

to be sent with your certificate request

A challenge password []:

An optional company name []:

root@loadbalancer:/etc/ssl/certs/pem/CSRandPrivateKey# ll

total 20

drwxr-xr-x 2 root root 4096 Dec 23 15:12 ./

drwxr-xr-x 3 root root 4096 Dec 23 15:08 ../

-rw-r--r-- 1 root root 1001 Dec 23 14:28 CSR.csr

-rw-r--r-- 1 root root 1704 Dec 23 14:28 privatekey.key

After that, you send the CSR.csr file to a Certificate Authority (such as GlobalSign). GlobalSign will send back the signed certificate, here a file named turizmk.crt.
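
It is also worth sanity-checking both files at this point (optional checks, not part of the original procedure): the first command prints the CSR contents before you submit it, and the last two confirm that the returned certificate really matches your private key (the two MD5 hashes must be identical).

openssl req -noout -text -in CSR.csr
openssl x509 -noout -modulus -in turizmk.crt | openssl md5
openssl rsa -noout -modulus -in privatekey.key | openssl md5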

Create a .pem file to install on the HAProxy load balancer:

root@loadbalancer:/etc/ssl/certs/pem/CSRandPrivateKey# cat privatekey.key turizmk.crt > /etc/ssl/certs/pem/turizmk.pem

root@loadbalancer:/etc/ssl/certs/pem/CSRandPrivateKey# vi /etc/haproxy/haproxy.cfg 

frontend HTTPS_NLB
bind *:443 ssl crt /etc/ssl/certs/pem/turizmk.pem
reqadd X-Forwarded-Proto:\ https
rspadd Strict-Transport-Security:\ max-age=31536000

root@loadbalancer:/etc/ssl/certs/pem/CSRandPrivateKey# service haproxy restart
* Restarting haproxy haproxy
...done.
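
Before restarting, the configuration (including the new certificate path) can be validated so a typo does not take the load balancer down (an optional step):

haproxy -c -f /etc/haproxy/haproxy.cfg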

 

 

 

How to Create and Set Up LUNs Using LVM on an "FC/iSCSI Target Server" on SUSE/RHEL/CentOS/Fedora

Size Mounted on
150G  /
200G  /usr/sap
3.0T  /hana/data
1.0T  /hana/shared
1.0T  /hana/log
2.5T  /hana/backup
200G  /installation

Summary of commands:

lsb_release -a

mkdir -p /usr/sap

ll /sys/class/scsi_host/host*

echo "- - -" > /sys/class/scsi_host/host0/scan

echo "- - -" > /sys/class/scsi_host/host1/scan

multipath -ll

pvcreate /dev/mapper/360002ac0000000000000005f000198bf

vgcreate vgsdevhana /dev/mapper/360002ac0000000000000005f000198bf

lvcreate -L +202G -n lv-usrsap vgsdevhana

ls -l /dev/vgsdevhana/*

mkfs.ext3 /dev/vgsdevhana/lv-usrsap

add to /etc/fstab:  /dev/vgsdevhana/lv-usrsap   /usr/sap   ext3   defaults 1 2

mount -a

df -Th

Step-by-step details of the above commands are below:

*******************Check Operating System version**

serddad1:~ # lsb_release -a

LSB Version:    core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch

Distributor ID: SUSE LINUX

Description:    SUSE Linux Enterprise Server 11 (x86_64)

Release:        11

Codename:       n/a

serddad1:~ #

 

**************Create Directory********************************

sdevhana01:~ # mkdir  -p /hana/data

sdevhana01:~ # mkdir -p  /hana/shared

sdevhana01:~ # mkdir -p  /hana/log

sdevhana01:~ # mkdir -p   /hana/backup

sdevhana01:~ # mkdir -p   /installation

****Linux: Scan for New SCSI Devices to Detect a New LUN Without Reboot********

serddad1:~ # ll  /sys/class/scsi_host/host*

lrwxrwxrwx 1 root root 0 Dec 20 12:53 /sys/class/scsi_host/host0 -> ../../devices/pci0000:10/0000:10:03.0/0000:13:00.0/host0/scsi_host/host0

lrwxrwxrwx 1 root root 0 Dec 20 12:53 /sys/class/scsi_host/host1 -> ../../devices/pci0000:10/0000:10:03.0/0000:13:00.1/host1/scsi_host/host1

serddad1:~ # echo "- - -" > /sys/class/scsi_host/host0/scan

serddad1:~ # echo "- - -" > /sys/class/scsi_host/host1/scan
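
With more than a couple of HBAs it is less error-prone to rescan every host in a loop rather than typing each path by hand (a small sketch doing the same thing as the two commands above):

for host in /sys/class/scsi_host/host*; do
  echo "- - -" > "$host/scan"
done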

*************Check LUN*************************************

sdevhana01:~ # multipath -ll

 

360002ac0000000000000005f000198bf dm-5 3PARdata,VV

size=8.0T features='0' hwhandler='0' wp=rw

`-+- policy='service-time 0' prio=1 status=active

|- 1:0:0:1 sdc 8:32 active ready running

`- 1:0:1:1 sdd 8:48 active ready running

*************Create physical volume and check *********************

sdevhana01:~ # pvcreate /dev/mapper/360002ac0000000000000005f000198bf

Physical volume "/dev/mapper/360002ac0000000000000005f000198bf" successfully created

sdevhana01:~ # pvs

PV                                                  VG     Fmt  Attr PSize   PFree

/dev/mapper/360002ac0000000000000005e000198bf_part2 system lvm2 a--  269.83g 17.83g

/dev/mapper/360002ac0000000000000005f000198bf              lvm2 a--    8.00t  8.00t

**************Create Volume Group and check******************************

sdevha01 # vgcreate vgsdevhana  /dev/mapper/360002ac0000000000000005f000198bf

Volume group "vgsdevhana" successfully created

sdevhana01:~ #   vgs

VG         #PV #LV #SN Attr   VSize   VFree

system       1   2   0 wz--n- 269.83g 17.83g

vgsdevhana   1   0   0 wz--n-   8.00t  8.00t

sdevhana01:~ #

**************Create Logical Volume and check*********************

sdevhana01:~ # lvcreate -L +202G -n lv-usrsap vgsdevhana

Logical volume "lv-usrsap" created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +3T -n lv-data vgsdevhana

Logical volume "lv-data" created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-data   vgsdevhana -wi-a----   3.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +1T -n lv-shared vgsdevhana

Logical volume "lv-shared" created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-data   vgsdevhana -wi-a----   3.00t

lv-shared vgsdevhana -wi-a----   1.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +2,5T -n lv-backup vgsdevhana

Invalid argument for --size: +2,5T

Error during parsing of command line.

sdevhana01:~ # lvcreate -L +2.5T -n lv-backup vgsdevhana

Logical volume "lv-backup" created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-backup vgsdevhana -wi-a----   2.50t

lv-data   vgsdevhana -wi-a----   3.00t

lv-shared vgsdevhana -wi-a----   1.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +2.5T -n lv-setup vgsdevhana

Volume group "vgsdevhana" has insufficient free space (341500 extents): 655360 required.

sdevhana01:~ # lvcreate -L +201G -n lv-setup vgsdevhana

Logical volume "lv-setup" created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-backup vgsdevhana -wi-a----   2.50t

lv-data   vgsdevhana -wi-a----   3.00t

lv-setup  vgsdevhana -wi-a---- 201.00g

lv-shared vgsdevhana -wi-a----   1.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +1T -n lv-log vgsdevhana

Logical volume "lv-log" created
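
As an aside (not what was done above): when a fixed -L size does not fit, as in the "insufficient free space" error earlier, lvcreate can also allocate by extents or by a percentage of the remaining space, for example:

lvcreate -l 100%FREE -n lv-setup vgsdevhana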

 

************************** format with ext3 *******************

(You can also format with ext4 or xfs instead of ext3.)
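
For example, assuming the xfsprogs package is installed, the equivalent command for XFS would simply be (an alternative sketch; the rest of this walkthrough sticks with ext3):

mkfs.xfs /dev/vgsdevhana/lv-data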

sdevhana01:~ # ls -l /dev/vgsdevhana/*

lrwxrwxrwx 1 root root 7 Dec 20 10:42 /dev/vgsdevhana/lv-backup -> ../dm-9

lrwxrwxrwx 1 root root 7 Dec 20 10:39 /dev/vgsdevhana/lv-data -> ../dm-7

lrwxrwxrwx 1 root root 8 Dec 20 10:44 /dev/vgsdevhana/lv-log -> ../dm-11

lrwxrwxrwx 1 root root 8 Dec 20 10:43 /dev/vgsdevhana/lv-setup -> ../dm-10

lrwxrwxrwx 1 root root 7 Dec 20 10:42 /dev/vgsdevhana/lv-shared -> ../dm-8

lrwxrwxrwx 1 root root 7 Dec 20 10:38 /dev/vgsdevhana/lv-usrsap -> ../dm-6

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-backup

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

167772160 inodes, 671088640 blocks

33554432 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

20480 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848, 512000000, 550731776, 644972544

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 29 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-data

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

201326592 inodes, 805306368 blocks

40265318 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

24576 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848, 512000000, 550731776, 644972544

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 28 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-log

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

67108864 inodes, 268435456 blocks

13421772 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

8192 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 31 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-setup

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

13172736 inodes, 52690944 blocks

2634547 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

1608 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 28 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-shared

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

67108864 inodes, 268435456 blocks

13421772 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

8192 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 37 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-usrsap

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

13238272 inodes, 52953088 blocks

2647654 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

1616 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 22 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ #

************************Add the new filesystems to /etc/fstab and mount them************************

 

sdevhana01:~ # vim /etc/fstab

 

/dev/system/swap     swap                 swap       defaults              0 0

/dev/system/root     /                    ext3       acl,user_xattr        1 1

/dev/disk/by-id/scsi-360002ac0000000000000005e000198bf-part1 /boot/efi            vfat       umask=0002,utf8=true  0 0

proc                 /proc                proc       defaults              0 0

sysfs                /sys                 sysfs      noauto                0 0

debugfs              /sys/kernel/debug    debugfs    noauto                0 0

usbfs                /proc/bus/usb        usbfs      noauto                0 0

devpts               /dev/pts             devpts     mode=0620,gid=5       0 0

/dev/vgsdevhana/lv-usrsap     /usr/sap                    ext3    defaults 1 2

/dev/vgsdevhana/lv-data         /hana/data               ext3    defaults 1 2

/dev/vgsdevhana/lv-shared     /hana/shared          ext3    defaults 1 2

/dev/vgsdevhana/lv-log            /hana/log                  ext3    defaults 1 2

/dev/vgsdevhana/lv-backup     /hana/backup        ext3    defaults 1 2

/dev/vgsdevhana/lv-setup        /installation           ext3    defaults 1 2 

~

 

"/etc/fstab" 14L, 1037C written

sdevhana01:~ # mount -a

sdevhana01:~ # df -Th

Filesystem                                          Type   Size  Used Avail Use% Mounted on

/dev/mapper/system-root                             ext3   197G  3.3G  184G   2% /

udev                                                tmpfs  505G  164K  505G   1% /dev

tmpfs                                               tmpfs  505G   84K  505G   1% /dev/shm

/dev/mapper/360002ac0000000000000005e000198bf_part1 vfat   157M   14M  144M   9% /boot/efi

/dev/mapper/vgsdevhana-lv--usrsap                   ext3   199G  188M  189G   1% /usr/sap

/dev/mapper/vgsdevhana-lv--data                     ext3   3.0T  200M  2.9T   1% /hana/data

/dev/mapper/vgsdevhana-lv--shared                   ext3  1008G  200M  957G   1% /hana/shared

/dev/mapper/vgsdevhana-lv--log                      ext3  1008G  200M  957G   1% /hana/log

/dev/mapper/vgsdevhana-lv--backup                   ext3   2.5T  203M  2.4T   1% /hana/backup

/dev/mapper/vgsdevhana-lv--setup                    ext3   198G  188M  188G   1% /installation

sdevhana01:~ #

**************************