How to separate IP-based access to production and test databases on Exadata. Is it possible?

Multiple public networks in the same cluster for production, test, and EBS databases

Our Exadata infrastructure:
/u01 --------> production database mount point
/u02 --------> test database mount point
192.168.90.3 exa01db01-vip
192.168.90.5 exa01db02-vip
exa01-scan IPs:
192.168.90.6
192.168.90.7
192.168.90.8

Question 1) Is it possible to change the SCAN or VIP names like this?
Exa01live-scan -----> 192.168.90.6 and 192.168.90.7; users connect to the PRODUCTION database only via these IPs.
Exa01test-scan -----> 192.168.90.8; users connect to the TEST database only via this IP.

Solution:

You can create multiple networks and configure multiple SCANs for different databases to use. The document below describes the steps. Note that the article is written for ODA, but the steps are the same for any 12.1.0.2 cluster.
ODA (Oracle Database Appliance): HowTo Configure Multiple Public Network on GI (Grid Infrastructure) 12c ( Doc ID 2101109.1 )
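In short, the MOS procedure adds a second network resource, per-node VIPs on that network, and a second SCAN plus SCAN listener. Below is a dry-run sketch of the srvctl calls; the subnet, interface, VIP, and SCAN names are hypothetical examples, not values from this cluster, so take the authoritative steps from the note.

```shell
#!/bin/bash
# Dry run: print the srvctl commands instead of executing them.
run() { echo "+ $*"; }

# Second public network (subnet/interface are hypothetical examples).
run srvctl add network -netnum 2 -subnet 192.168.91.0/255.255.255.0/eth2

# Per-node VIPs on network 2 (VIP names are hypothetical).
run srvctl add vip -node exa01db01 -netnum 2 -address exa01test-vip1/255.255.255.0
run srvctl add vip -node exa01db02 -netnum 2 -address exa01test-vip2/255.255.255.0

# SCAN and SCAN listener on network 2; exa01test-scan must resolve in DNS.
run srvctl add scan -scanname exa01test-scan -netnum 2
run srvctl add scan_listener -netnum 2
run srvctl start scan_listener -netnum 2
```

After the resources are up, each database is pointed at its network (for example via the LISTENER_NETWORKS init parameter), as the note describes.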

 


How to install rsh, rlogin, and rexec for a Data Protector/NetBackup client

Install rsh and rsh-server using yum:

[root@cintqasrv01 ~]# yum -y install rsh rsh-server
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package rsh.x86_64 0:0.17-60.el6 will be installed
---> Package rsh-server.x86_64 0:0.17-60.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
Package Arch Version Repository Size
====================================================================================================================================
Installing:
rsh x86_64 0.17-60.el6 nas 48 k
rsh-server x86_64 0.17-60.el6 nas 42 k

Transaction Summary
====================================================================================================================================
Install 2 Package(s)

Total download size: 90 k
Installed size: 141 k
Downloading Packages:
————————————————————————————————————————————
Total 2.9 MB/s | 90 kB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : rsh-0.17-60.el6.x86_64 1/2
Installing : rsh-server-0.17-60.el6.x86_64 2/2
Installed products updated.
Verifying : rsh-server-0.17-60.el6.x86_64 1/2
Verifying : rsh-0.17-60.el6.x86_64 2/2

Installed:
rsh.x86_64 0:0.17-60.el6 rsh-server.x86_64 0:0.17-60.el6

Complete!

 

Enable rsh (or rlogin, …)

Set the "disable" option to "no":

[root@cintqasrv01 ~]# vi /etc/xinetd.d/rsh

# default: on
# description: The rshd server is the server for the rcmd(3) routine and, \
# consequently, for the rsh(1) program. The server provides \
# remote execution facilities with authentication based on \
# privileged port numbers from trusted hosts.
service shell
{
socket_type = stream
wait = no
user = root
log_on_success += USERID
log_on_failure += USERID
server = /usr/sbin/in.rshd
disable = no
}

Restart the xinetd daemon:

[root@cintqasrv01 ~]# service xinetd restart
Stopping xinetd: [ OK ]
Starting xinetd: [ OK ]

Append rsh, rlogin, and rexec to the /etc/securetty file.

[root@cintqasrv01 ~]# vi /etc/securetty

console
vc/1
vc/2
vc/3
vc/4
vc/5
vc/6
vc/7
vc/8
vc/9
vc/10
vc/11
tty1
tty2
tty3
tty4
tty5
tty6
tty7
tty8
tty9
tty10
tty11
rsh
rlogin
rexec
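This edit can also be scripted idempotently. A sketch: `add_securetty` is a helper name of my own, and `SECURETTY` is parameterized so you can point it at a test copy before touching the real /etc/securetty.

```shell
#!/bin/bash
# Idempotently append the r-services to a securetty-style file.
# SECURETTY is parameterized so the sketch can be tried on a copy first.
SECURETTY=${SECURETTY:-/etc/securetty}
add_securetty() {
  local svc
  for svc in rsh rlogin rexec; do
    # grep -qx matches the whole line, so repeated runs add nothing.
    grep -qx "$svc" "$SECURETTY" || echo "$svc" >> "$SECURETTY"
  done
}
```

After sourcing the function, run `SECURETTY=/etc/securetty add_securetty` as root.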

Add the client and backup server to the /etc/hosts file.

[root@cintqasrv01 ~]# vim /etc/hosts
10.80.0.13 client01.turizmkampanyalari.com.tr client01
10.90.0.52 tkbcksrv1.turizmkampanyalari.com.tr tkbcksrv1

Add the following line (backup server and user) on the target (client) machine:

[root@cintqasrv01 ~]# vi ~/.rhosts

tkbcksrv1   root

Check that the /etc/pam.d/rsh file looks like the following:

[root@cintqasrv01 ~]# vim  /etc/pam.d/rsh
#%PAM-1.0
# For root login to succeed here with pam_securetty, “rsh” must be
# listed in /etc/securetty.
auth required pam_nologin.so
auth required pam_securetty.so
auth required pam_env.so
auth required pam_rhosts.so
account include password-auth
session optional pam_keyinit.so force revoke
session required pam_loginuid.so
session include password-auth

Disable the firewall and SELinux security settings. Note that changing /etc/selinux/config takes effect only after a reboot; use setenforce 0 for the current session.

[root@cintqasrv01 ~]# chkconfig iptables off
[root@cintqasrv01 ~]# /etc/init.d/iptables stop
[root@cintqasrv01 ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#   targeted - Targeted processes are protected,
#   mls - Multi Level Security protection.
SELINUXTYPE=targeted
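If disabling iptables outright is not acceptable in your environment, an alternative is to open only the classic r-service ports (rexec 512/tcp, rlogin 513/tcp, rsh 514/tcp). A dry-run sketch that prints the commands rather than executing them; adapt it to your chain layout, and test with your backup software, since rsh also uses reserved source ports.

```shell
#!/bin/bash
# Dry run: print iptables commands for the r-service ports.
run() { echo "+ $*"; }
for port in 512 513 514; do   # rexec, rlogin, rsh (shell)
  run iptables -I INPUT -p tcp --dport "$port" -j ACCEPT
done
run service iptables save
```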

You can now install the Data Protector or NetBackup client.

How to Generate a CSR with OpenSSL and Install an SSL Certificate on HAProxy on Linux

How to generate a CSR and private key with OpenSSL

 

Make a directory for the CSR and private key:

root@loadbalancer:mkdir -p /etc/ssl/certs/pem/CSRandPrivateKey

root@loadbalancer:cd /etc/ssl/certs/pem/CSRandPrivateKey

root@loadbalancer:/etc/ssl/certs/pem/CSRandPrivateKey# openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privatekey.key

Generating a 2048 bit RSA private key

……………………+++

………………………………………….+++

writing new private key to ‘privatekey.key’

—–

You are about to be asked to enter information that will be incorporated

into your certificate request.

What you are about to enter is what is called a Distinguished Name or a DN.

There are quite a few fields but you can leave some blank

For some fields there will be a default value,

If you enter ‘.’, the field will be left blank.

—–

Country Name (2 letter code) [AU]:TR

State or Province Name (full name) [Some-State]:Istanbul

Locality Name (eg, city) []:Maslak

Organization Name (eg, company) [Internet Widgits Pty Ltd]:Turizm Kampanyları Ltd.

Organizational Unit Name (eg, section) []:

Common Name (e.g. server FQDN or YOUR name) []:*.turizmkampanylari.com

Email Address []:

 

Please enter the following ‘extra’ attributes

to be sent with your certificate request

A challenge password []:

An optional company name []:

root@loadbalancer:/etc/ssl/certs/pem/CSRandPrivateKey# ll

total 20

drwxr-xr-x 2 root root 4096 Dec 23 15:12 ./

drwxr-xr-x 3 root root 4096 Dec 23 15:08 ../

-rw-r--r-- 1 root root 1001 Dec 23 14:28 CSR.csr

-rw-r--r-- 1 root root 1704 Dec 23 14:28 privatekey.key
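As an aside, the interactive prompts can be answered in one shot with openssl's -subj option, which helps when scripting. A sketch using the same example DN values as above; it writes to a temporary directory rather than /etc/ssl.

```shell
#!/bin/bash
# Non-interactive CSR + key generation (sketch; DN values are the example
# values used above, and output goes to a temp dir, not /etc/ssl).
DIR=${DIR:-$(mktemp -d)}
openssl req -new -newkey rsa:2048 -nodes \
  -keyout "$DIR/privatekey.key" -out "$DIR/CSR.csr" \
  -subj "/C=TR/ST=Istanbul/L=Maslak/O=Turizm Kampanyalari Ltd./CN=*.turizmkampanylari.com" \
  2>/dev/null
# Sanity-check the resulting request.
openssl req -in "$DIR/CSR.csr" -noout -verify
```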

After that, send the CSR.csr file to a certificate authority (such as GlobalSign). GlobalSign will send back a certificate file, here named turizmk.crt.

Create a .pem file to install on the HAProxy load balancer:

root@loadbalancer:/etc/ssl/certs/pem/CSRandPrivateKey# cat privatekey.key turizmk.crt > /etc/ssl/certs/pem/turizmk.pem

root@loadbalancer:/etc/ssl/certs/pem/CSRandPrivateKey# vi /etc/haproxy/haproxy.cfg 

frontend HTTPS_NLB
bind *:443 ssl crt /etc/ssl/certs/pem/turizmk.pem
reqadd X-Forwarded-Proto:\ https
rspadd Strict-Transport-Security:\ max-age=31536000
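A side note: reqadd and rspadd are legacy directives that newer HAProxy releases (2.x) remove. If you are on a recent version, the equivalent configuration would be a sketch like this, with the same paths as above:

```
frontend HTTPS_NLB
    bind *:443 ssl crt /etc/ssl/certs/pem/turizmk.pem
    http-request set-header X-Forwarded-Proto https
    http-response set-header Strict-Transport-Security max-age=31536000
```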

root@loadbalancer:/etc/ssl/certs/pem/CSRandPrivateKey# service haproxy restart
* Restarting haproxy haproxy
…done.
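Before bundling key and certificate, it is worth confirming they actually belong together by comparing their RSA modulus digests. A self-contained sketch: it generates a throwaway self-signed certificate to stand in for the CA-issued turizmk.crt.

```shell
#!/bin/bash
# Check that a private key and certificate share the same RSA modulus.
D=$(mktemp -d)
# Throwaway self-signed cert standing in for the CA-issued turizmk.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$D/privatekey.key" -out "$D/turizmk.crt" \
  -subj "/CN=*.turizmkampanylari.com" 2>/dev/null
k=$(openssl rsa  -in "$D/privatekey.key" -noout -modulus | sha256sum)
c=$(openssl x509 -in "$D/turizmk.crt"   -noout -modulus | sha256sum)
if [ "$k" = "$c" ]; then echo "key and cert match"; else echo "MISMATCH"; fi
```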

 

 

 

Network Ports Used in Oracle Enterprise Manager 12c

These ports will be used in every Enterprise Manager 12c installation and will require firewall and/or ACL modifications if your network is restricted.

OEM 12c server: 172.76.1.100
Production DB server: 172.76.10.4
Test DB server: 172.76.20.4
Source: 172.76.1.100 ----> Destination: 172.76.10.4, 172.76.20.4
Source: 172.76.10.4, 172.76.20.4 ----> Destination: 172.76.1.100
Ports: listed below

MS SQL Server monitoring ports (if you have an MS SQL Server):

Sqlnet1             --> tcp 1521

MS-SQL-Monitor      --> tcp 1434

MS-SQL-Monitor_UDP  --> udp 1434

MS-SQL-Server       --> tcp 1433

MS-SQL-Server_UDP   --> udp 1433

Network ports used in Oracle Enterprise Manager 12c:

Enterprise Manager Upload HTTP Port                 --> tcp 4889

Enterprise Manager Upload HTTP SSL Port             --> tcp 4903

Enterprise Manager Central Console HTTP SSL Port    --> tcp 7802

Node Manager HTTP SSL Port                          --> tcp 7403

Managed Server HTTP Port                            --> tcp 7202

Enterprise Manager Central Console HTTP Port        --> tcp 7788

Oracle Management Agent Port                        --> tcp 3872

Admin Server HTTP SSL Port                          --> tcp 7101-7102

Managed Server HTTP SSL Port                        --> tcp 7301

Enterprise Manager OHS Upload HTTP SSL              --> tcp 1159

EM OHS Central Console HTTP SSL (Apache/UI)         --> tcp 7799

Database Targets - SQL*Net Listener (depends on listener configuration) --> tcp 1521-1522
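Once the rules are in place, you can probe the OMS ports from a database server with bash's /dev/tcp. A sketch; the host below is the example OMS IP from above and the port list is a subset of the table.

```shell
#!/bin/bash
# Probe a TCP port; prints "open" or "closed" for each host:port.
probe() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}

OMS_HOST=${OMS_HOST:-172.76.1.100}   # example OMS IP from above
for port in 1159 3872 4889 4903 7788 7799; do
  probe "$OMS_HOST" "$port"
done
```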

 

Resource: https://blogs.oracle.com/oem/entry/planning_your_oracle_entperprise_manager

How to Create and Set Up LUNs using LVM in an "FC/iSCSI Target Server" on SUSE/RHEL/CentOS/Fedora

Size Mounted on
150G  /
200G  /usr/sap
3.0T  /hana/data
1.0T  /hana/shared
1.0T  /hana/log
2.5T  /hana/backup
200G  /installation

Summary of commands:

lsb_release -a

mkdir -p /usr/sap

ll /sys/class/scsi_host/host*

echo "- - -" > /sys/class/scsi_host/host0/scan

echo "- - -" > /sys/class/scsi_host/host1/scan

multipath -ll

pvcreate /dev/mapper/360002ac0000000000000005f000198bf

vgcreate vgsdevhana /dev/mapper/360002ac0000000000000005f000198bf

lvcreate -L +202G -n lv-usrsap vgsdevhana

ls -l /dev/vgsdevhana/*

mkfs.ext3 /dev/vgsdevhana/lv-usrsap

add to /etc/fstab:  /dev/vgsdevhana/lv-usrsap   /usr/sap   ext3   defaults 1 2

mount -a

df -Th

Step-by-step details of the above commands follow:

*******************Check Operating System version**

serddad1:~ # lsb_release -a

LSB Version:    core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch

Distributor ID: SUSE LINUX

Description:    SUSE Linux Enterprise Server 11 (x86_64)

Release:        11

Codename:       n/a

serddad1:~ #

 

**************Create Directory********************************

sdevhana01:~ # mkdir  -p /hana/data

sdevhana01:~ # mkdir -p  /hana/shared

sdevhana01:~ # mkdir -p  /hana/log

sdevhana01:~ # mkdir -p   /hana/backup

sdevhana01:~ # mkdir -p   /installation

****Linux Scan for New Scsi Device to Detect New Lun Without Reboot********

serddad1:~ # ll  /sys/class/scsi_host/host*

lrwxrwxrwx 1 root root 0 Dec 20 12:53 /sys/class/scsi_host/host0 -> ../../devices/pci0000:10/0000:10:03.0/0000:13:00.0/host0/scsi_host/host0

lrwxrwxrwx 1 root root 0 Dec 20 12:53 /sys/class/scsi_host/host1 -> ../../devices/pci0000:10/0000:10:03.0/0000:13:00.1/host1/scsi_host/host1

serddad1:~ # echo "- - -" > /sys/class/scsi_host/host0/scan

serddad1:~ # echo "- - -" > /sys/class/scsi_host/host1/scan
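On systems with more HBAs, the two echo commands generalize to a loop over whatever sysfs exposes. A sketch that silently skips hosts it cannot write (for example, when not run as root):

```shell
#!/bin/bash
# Rescan every SCSI host found in sysfs ("- - -" = all channels/targets/LUNs).
rescan_all() {
  local scan
  shopt -s nullglob   # empty loop if no scsi_host entries exist
  for scan in /sys/class/scsi_host/host*/scan; do
    echo "- - -" 2>/dev/null > "$scan" && echo "rescanned ${scan%/scan}"
  done
  echo "rescan pass complete"
}
rescan_all
```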

*************Check LUN*************************************

sdevhana01:~ # multipath -ll

 

360002ac0000000000000005f000198bf dm-5 3PARdata,VV

size=8.0T features='0' hwhandler='0' wp=rw

`-+- policy='service-time 0' prio=1 status=active

|- 1:0:0:1 sdc 8:32 active ready running

`- 1:0:1:1 sdd 8:48 active ready running

*************Create physical volume and check *********************

sdevhana01:~ # pvcreate /dev/mapper/360002ac0000000000000005f000198bf

Physical volume “/dev/mapper/360002ac0000000000000005f000198bf” successfully created

sdevhana01:~ # pvs

PV                                                  VG     Fmt  Attr PSize   PFree

/dev/mapper/360002ac0000000000000005e000198bf_part2 system lvm2 a--  269.83g 17.83g

/dev/mapper/360002ac0000000000000005f000198bf              lvm2 a--    8.00t  8.00t

**************Create Volume Group and check******************************

sdevha01 # vgcreate vgsdevhana  /dev/mapper/360002ac0000000000000005f000198bf

Volume group “vgsdevhana” successfully created

sdevhana01:~ #   vgs

VG         #PV #LV #SN Attr   VSize   VFree

system       1   2   0 wz--n- 269.83g 17.83g

vgsdevhana   1   0   0 wz--n-   8.00t  8.00t

sdevhana01:~ #

**************Create Logical Volume and check*********************

sdevhana01:~ # lvcreate -L +202G -n lv-usrsap vgsdevhana

Logical volume “lv-usrsap” created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +3T -n lv-data vgsdevhana

Logical volume “lv-data” created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-data   vgsdevhana -wi-a----   3.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +1T -n lv-shared vgsdevhana

Logical volume “lv-shared” created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-data   vgsdevhana -wi-a----   3.00t

lv-shared vgsdevhana -wi-a----   1.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +2,5T -n lv-backup vgsdevhana

Invalid argument for --size: +2,5T

Error during parsing of command line.

sdevhana01:~ # lvcreate -L +2.5T -n lv-backup vgsdevhana

Logical volume “lv-backup” created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-backup vgsdevhana -wi-a----   2.50t

lv-data   vgsdevhana -wi-a----   3.00t

lv-shared vgsdevhana -wi-a----   1.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +2.5T -n lv-setup vgsdevhana

Volume group “vgsdevhana” has insufficient free space (341500 extents): 655360 required.

sdevhana01:~ # lvcreate -L +201G -n lv-setup vgsdevhana

Logical volume “lv-setup” created

sdevhana01:~ # lvs

LV        VG         Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert

root      system     -wi-ao--- 200.00g

swap      system     -wi-ao---  52.00g

lv-backup vgsdevhana -wi-a----   2.50t

lv-data   vgsdevhana -wi-a----   3.00t

lv-setup  vgsdevhana -wi-a---- 201.00g

lv-shared vgsdevhana -wi-a----   1.00t

lv-usrsap vgsdevhana -wi-a---- 202.00g

sdevhana01:~ # lvcreate -L +1T -n lv-log vgsdevhana

Logical volume “lv-log” created

 

************************** format with ext3 *******************

(You can also format with ext4 or xfs.)

sdevhana01:~ # ls -l /dev/vgsdevhana/*

lrwxrwxrwx 1 root root 7 Dec 20 10:42 /dev/vgsdevhana/lv-backup -> ../dm-9

lrwxrwxrwx 1 root root 7 Dec 20 10:39 /dev/vgsdevhana/lv-data -> ../dm-7

lrwxrwxrwx 1 root root 8 Dec 20 10:44 /dev/vgsdevhana/lv-log -> ../dm-11

lrwxrwxrwx 1 root root 8 Dec 20 10:43 /dev/vgsdevhana/lv-setup -> ../dm-10

lrwxrwxrwx 1 root root 7 Dec 20 10:42 /dev/vgsdevhana/lv-shared -> ../dm-8

lrwxrwxrwx 1 root root 7 Dec 20 10:38 /dev/vgsdevhana/lv-usrsap -> ../dm-6

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-backup

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

167772160 inodes, 671088640 blocks

33554432 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

20480 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848, 512000000, 550731776, 644972544

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 29 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-data

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

201326592 inodes, 805306368 blocks

40265318 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

24576 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848, 512000000, 550731776, 644972544

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 28 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-log

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

67108864 inodes, 268435456 blocks

13421772 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

8192 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 31 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-setup

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

13172736 inodes, 52690944 blocks

2634547 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

1608 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 28 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-shared

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

67108864 inodes, 268435456 blocks

13421772 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

8192 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

102400000, 214990848

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 37 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ # mkfs.ext3 /dev/vgsdevhana/lv-usrsap

mke2fs 1.41.9 (22-Aug-2009)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

13238272 inodes, 52953088 blocks

2647654 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

1616 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 22 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

sdevhana01:~ #

************************Add /etc/fstab and mount it************************

 

sdevhana01:~ # vim /etc/fstab

 

/dev/system/swap     swap                 swap       defaults              0 0

/dev/system/root     /                    ext3       acl,user_xattr        1 1

/dev/disk/by-id/scsi-360002ac0000000000000005e000198bf-part1 /boot/efi            vfat       umask=0002,utf8=true  0 0

proc                 /proc                proc       defaults              0 0

sysfs                /sys                 sysfs      noauto                0 0

debugfs              /sys/kernel/debug    debugfs    noauto                0 0

usbfs                /proc/bus/usb        usbfs      noauto                0 0

devpts               /dev/pts             devpts     mode=0620,gid=5       0 0

/dev/vgsdevhana/lv-usrsap     /usr/sap                    ext3    defaults 1 2

/dev/vgsdevhana/lv-data         /hana/data               ext3    defaults 1 2

/dev/vgsdevhana/lv-shared     /hana/shared          ext3    defaults 1 2

/dev/vgsdevhana/lv-log            /hana/log                  ext3    defaults 1 2

/dev/vgsdevhana/lv-backup     /hana/backup        ext3    defaults 1 2

/dev/vgsdevhana/lv-setup        /installation           ext3    defaults 1 2 

~

 

“/etc/fstab” 14L, 1037C written

sdevhana01:~ # mount -a

sdevhana01:~ # df -Th

Filesystem                                          Type   Size  Used Avail Use% Mounted on

/dev/mapper/system-root                             ext3   197G  3.3G  184G   2% /

udev                                                tmpfs  505G  164K  505G   1% /dev

tmpfs                                               tmpfs  505G   84K  505G   1% /dev/shm

/dev/mapper/360002ac0000000000000005e000198bf_part1 vfat   157M   14M  144M   9% /boot/efi

/dev/mapper/vgsdevhana-lv--usrsap                   ext3   199G  188M  189G   1% /usr/sap

/dev/mapper/vgsdevhana-lv--data                     ext3   3.0T  200M  2.9T   1% /hana/data

/dev/mapper/vgsdevhana-lv--shared                   ext3  1008G  200M  957G   1% /hana/shared

/dev/mapper/vgsdevhana-lv--log                      ext3  1008G  200M  957G   1% /hana/log

/dev/mapper/vgsdevhana-lv--backup                   ext3   2.5T  203M  2.4T   1% /hana/backup

/dev/mapper/vgsdevhana-lv--setup                    ext3   198G  188M  188G   1% /installation

sdevhana01:~ #

**************************

 

 

 

 

 

How to extend an LVM swap volume

In this case, increase by 20G:

 

Summary of commands:

cat /etc/fstab | grep swap

swapoff /dev/system/swap

lvextend -L +20G /dev/system/swap

mkswap /dev/system/swap

swapon /dev/system/swap

free -g

 

Details of the above commands are below:

sdevhana01:~ # free -g

total       used       free     shared    buffers     cached

Mem:          1009         10        999          0          0          0

-/+ buffers/cache:          9        999

Swap:           31          0         31

sdevhana01:~ # cat /etc/fstab | grep swap

/dev/system/swap     swap                 swap       defaults              0 0

sdevhana01:~ # swapoff /dev/system/swap

sdevhana01:~ # free -g

total       used       free     shared    buffers     cached

Mem:          1009         10        999          0          0          0

-/+ buffers/cache:          9        999

Swap:            0          0          0

sdevhana01:~ # vgs

VG     #PV #LV #SN Attr   VSize   VFree

system   1   2   0 wz--n- 269.83g 35.83g

sdevhana01:~ # lvextend -L +20G /dev/system/swap

Extending logical volume swap to 52.00 GiB

Logical volume swap successfully resized

sdevhana01:~ # mkswap /dev/system/swap

mkswap: /dev/system/swap: warning: don’t erase bootbits sectors

on whole disk. Use -f to force.

Setting up swapspace version 1, size = 54525948 KiB

no label, UUID=14a35aeb-6e1a-4848-a7ca-eb315e905d31

sdevhana01:~ # swapon /dev/system/swap

sdevhana01:~ # free -g

total       used       free     shared    buffers     cached

Mem:          1009         10        999          0          0          0

-/+ buffers/cache:          9        999

Swap:           51          0         51

sdevhana01:~ #

How to install an FTP server and create an FTP user with access to a specific directory only on Linux

Install the FTP server:

yum install vsftpd
service vsftpd start
chkconfig vsftpd on

Configure VSFTP:

Take a backup of the /etc/vsftpd/vsftpd.conf configuration file on the Linux server and add these options to your config file.

mv  /etc/vsftpd/vsftpd.conf  /etc/vsftpd/vsftpd.conf.old

vim  /etc/vsftpd/vsftpd.conf

anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
anon_umask=077
anon_upload_enable=YES
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
chown_uploads=YES
chown_username=daemon
xferlog_std_format=YES
listen=YES

pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES
chroot_local_user=YES

Restart the vsftpd:

  service vsftpd restart

Create a user with a specific path; the FTP user can connect only to that path.

Example:

username:ftp_domain

groupname:ftp_domain

path=/home/ftp_domain

Configuration;

[root@a]# groupadd ftp_domain
[root@a]# useradd -g ftp_domain -d /home/ftp_domain -s /sbin/nologin ftp_domain
[root@a]# passwd ftp_domain
Changing password for user ftp_domain.
New password:
BAD PASSWORD: it is too simplistic/systematic
Retype new password:
passwd: all authentication tokens updated successfully.
[root@a]# cat /etc/passwd | grep ftp_domain
ftp_domain:x:5004:5005::/home/ftp_domain:/sbin/nologin
[root@a]# cat /etc/group | grep ftp_domain
ftp_domain:x:5005:
[root@a]#
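As a quick sanity check, the passwd(5) entry can be parsed field by field to confirm the nologin shell and home directory. A sketch against the sample entry shown above:

```shell
#!/bin/sh
# Parse a passwd(5) line and verify shell and home for the FTP-only account.
entry="ftp_domain:x:5004:5005::/home/ftp_domain:/sbin/nologin"
home=$(echo "$entry" | cut -d: -f6)    # field 6: home directory
shell=$(echo "$entry" | cut -d: -f7)   # field 7: login shell
[ "$shell" = "/sbin/nologin" ] && echo "shell ok: $shell"
[ "$home" = "/home/ftp_domain" ] && echo "home ok: $home"
```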

Test it with the FileZilla FTP client:

 


[root@a]# ls -l /home/ftp_domain/
total 4
drwxr-xr-x 2 ftp_domain ftp_domain 4096 Dec 15 17:43 images
[root@a]#