
Wednesday, February 15, 2017

Step by Step NFS Server and Client Configuration in HP-UX

In this post, I will explain the step-by-step method of NFS server and client configuration on the HP-UX operating system. Network File System (NFS) is a distributed file system protocol that allows a client computer to access a file system shared by another computer or server over the network.

The NFS server exports a directory that lives on its own locally attached physical disk. I will configure the NFS server on my HP-UX machine; for the client you can use another HP-UX machine or a Linux machine.

We will use the following NFS server and client IP addresses on my HP-UX and Linux machines.

NFS Server : 10.135.0.27, shared folder /backup
NFS Client : 10.135.0.2, mount point /home/backup

Steps to configure the NFS server:

1. First we will make sure both the NFS server (10.135.0.27) and the client (10.135.0.2) are reachable. After that we will add entries in /etc/hosts, or configure DNS, to resolve the names if you use server names instead of IP addresses. In our case we will use IP addresses rather than names.

hpx:/>vi /etc/hosts

10.135.0.27 hpxnfsserver
10.135.0.2 hpxnfsclient

The same /etc/hosts entries should be present on both the server and the client machine so that each can resolve the other.
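A quick reachability check from the server might look like this (a sketch; the -n count flag follows HP-UX ping syntax, and hpxnfsclient comes from the /etc/hosts entry above):

hpx:/>ping hpxnfsclient -n 3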


2. In the second step we identify the folder to be exported from the server to the client. On my HP-UX server machine, I am going to export the folder /backup.

3. Next we change the shared folder's permissions. Normally you would not grant world read/write access to a shared folder, but in my case, for testing purposes, I will set mode 777 so that NFS client users can read and write data in the shared folder.

hpx:/>chmod 777 /backup
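To confirm the new mode, you can list the directory (the listing shown is illustrative):

hpx:/>ls -ld /backup
drwxrwxrwx   2 root       sys             96 Feb 15 10:20 /backup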

4. In this step we set NFS_SERVER=1 in the configuration file /etc/rc.config.d/nfsconf to enable the machine to act as an NFS server.

hpx:/>vi /etc/rc.config.d/nfsconf
Change the NFS_SERVER parameter as follows:
NFS_SERVER=1
PCNFS_SERVER=0
START_NFSLOGD=0
START_MOUNTD=1
MOUNTD_OPTIONS=""
Save and exit the file.
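Before restarting, a quick sanity check of the parameters we care about (a sketch):

hpx:/>grep -E "NFS_SERVER|START_MOUNTD" /etc/rc.config.d/nfsconf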

5. In the final server-side step we add an entry for our shared folder to the export file /etc/dfs/dfstab with the proper permissions.

hpx:/>vi /etc/dfs/dfstab
/usr/sbin/share -F nfs -o rw -d "Test directory" /backup

Save and exit the file.

In the line above, /backup is the shared folder and the "rw" option grants read/write access to it.
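You can also activate the new dfstab entry immediately, without restarting the service; running share with no arguments then lists the current exports (a sketch):

hpx:/>/usr/sbin/shareall -F nfs
hpx:/>/usr/sbin/share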

6. Now restart the NFS service on the server. Because NFS_SERVER=1 is set in /etc/rc.config.d/nfsconf, the share is also re-exported automatically after a reboot of the HP-UX server.

hpx:/>/sbin/init.d/nfs.server stop
hpx:/>/sbin/init.d/nfs.server start

7. With the commands below we can check whether the NFS folder is shared, and also confirm which clients are allowed to access it.

hpx:/>/usr/sbin/exportfs -av

hpx:/>shareall -F nfs

Steps to access the NFS shared folder from the NFS client:

1. For the NFS client configuration I will log in to my Linux machine. In my case I will use a CentOS machine as the client; you can choose your NFS client machine as per your requirement. (Client-side commands below are shown with a [client]# prompt.)

2. On the client machine, choose the mount point where the NFS shared folder /backup, exported from the NFS server 10.135.0.27, will be mounted. On my Linux machine I will mount it on /home/backup.
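If the mount point does not exist yet, create it first on the client:

[client]# mkdir -p /home/backup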

3. In this step we verify the list of shares visible from the client machine. Use the "showmount" command below to check which NFS shared folders are accessible.

[client]# showmount -e 10.135.0.27
export list for 10.135.0.27
/backup

If we get this output, the server-side export configuration is fine and there is no connectivity problem between the NFS server and the client.

4. In the next step we mount the NFS share locally on our destination mount point.

[client]# mount 10.135.0.27:/backup /home/backup
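To make the mount persist across reboots on the Linux client, you could also add an entry to /etc/fstab; a minimal sketch:

[client]# vi /etc/fstab
10.135.0.27:/backup    /home/backup    nfs    defaults    0 0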

5. In the final step, check the NFS-mounted folder with the "df -h" command on the Linux client (on an HP-UX client you would use "bdf"). It shows the mounted folder where the exported data resides. For testing purposes you can create a file on /home/backup.
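A quick end-to-end check might look like this (nfs_test is an arbitrary file name chosen for illustration):

[client]# df -h /home/backup
[client]# touch /home/backup/nfs_test
[client]# ls -l /home/backup/nfs_test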

I hope you like this post; it covers the whole installation and configuration process step by step. Please let me know if you encounter any issue during NFS configuration on an HP-UX machine.

Monday, February 6, 2017

How to create local zone in Solaris 10

Hello Friends,

In this post, I will explain how to create a new local zone on the Sun Solaris operating system. Before describing the step-by-step installation and configuration method, we need to understand what a zone is and where it is used.

Basically, a zone is a virtual operating system environment created within a single instance of the Solaris operating system. The main goal of this technology is efficient resource utilization, and we can create multiple zones on one Solaris operating system.
Solaris 10's zone partitioning technology can be used to create local zones that behave like virtual servers. All local zones are controlled from the system's global zone. Processes running in a zone are completely isolated from the rest of the system.


Note: processes running in a local zone can be monitored from the global zone, but processes running in the global zone, or in another local zone, cannot be monitored from a local zone.
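For example, from the global zone you can list one local zone's processes (a sketch, assuming a zone named sun01 is already running):

[sun]# ps -fz sun01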

Global Zone: When we install the Solaris 10 operating system, a global zone gets installed automatically. The core operating system runs in the global zone, and all local zones run on top of this same global zone. Using the "zoneadm" command we can list all the configured zones running on the Solaris operating system.


# zoneadm list -v

  ID NAME             STATUS         PATH
   0 global           running        /

Step by step method to create a Local Zone:

When we create a local Solaris zone from the global zone, we have to complete a few prerequisites before installing it.

Prerequisites: A fair amount of disk space is required to install a new zone. It needs at least 3 GB to copy the essential files to the local zone; in my case I normally keep 10 GB of free disk space for a local zone installation. For network connectivity we also require a dedicated IP address.

1. First we check the available disk space and the network configuration by running the commands below.

[sun]# df -h /
 Filesystem             size   used  avail capacity  Mounted on
 /dev/dsk/c1t1d0s0       50G    22G   28G    46%    /

[sun] # ifconfig -a
 lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
 inet 127.0.0.1 netmask ff000000
 em0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
 inet 10.135.0.23 netmask fffffe00 broadcast 10.135.1.255

Here, in the "df -h" output, we can see that disk "c1t1d0s0" is mounted on the root file system. The root partition is about 50 GB in total with 28 GB available; since we need 10 GB of free space to install and configure the local zone, the free space on the root partition is sufficient.

In "ifconfig" command output we can able to see the the ip address of global zone.

2. As we have sufficient space on the server, we can go ahead with the local zone installation. First we create a directory under which the zone will be installed; all of the zone's files are kept in this folder.

[sun]# mkdir /zones

3. The next step is to define/create the zone root, the path to the zone's root directory relative to the global zone's root directory. The zone root must be owned by root with mode 700. This path is used to set the zonepath property during the zone creation process.

[sun]# cd /zones
[sun]# mkdir sun01
[sun]# chmod -R 700 sun01
[sun]# ls -l
 total 2
 drwx------   2 root     root         512 Feb 06 12:46 sun01

In a Sparse Root Zone, the directories /usr, /sbin, /lib and /platform will be mounted as loopback file systems. That is, although all those directories appear as normal directories under the sparse root zone, they will be mounted as read-only file systems. Any change to those directories in the global zone can be seen from the sparse root zone.


However, if you need the ability to write into any of those directories, you may need to configure a Whole Root Zone; for example, software like ClearCase needs write permission to the /usr directory, and in that case a Whole Root Zone is the way to go. Below we first create and configure a 'Sparse Root' local zone; the 'Whole Root' variant follows right after.

4. In this step we create and configure a new 'Sparse Root' local zone, with root privileges. For zone configuration we use the "zonecfg" command, the standard tool for this task.

[sun]# zonecfg -z sun01
sun01: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:sun01> create
zonecfg:sun01> set zonepath=/zones/sun01
zonecfg:sun01> set autoboot=true
zonecfg:sun01> add net
zonecfg:sun01:net> set physical=em0
zonecfg:sun01:net> set address=10.135.0.24
zonecfg:sun01:net> end
zonecfg:sun01> add fs
zonecfg:sun01:fs> set dir=/repo2
zonecfg:sun01:fs> set special=/dev/dsk/c1t20d0s1
zonecfg:sun01:fs> set raw=/dev/rdsk/c1t20d0s1
zonecfg:sun01:fs> set type=ufs
zonecfg:sun01:fs> set options noforcedirectio
zonecfg:sun01:fs> end
zonecfg:sun01> add inherit-pkg-dir
zonecfg:sun01:inherit-pkg-dir> set dir=/opt/csw
zonecfg:sun01:inherit-pkg-dir> end
zonecfg:sun01> info
zonepath: /zones/sun01
autoboot: true
pool:
inherit-pkg-dir:   dir: /lib
inherit-pkg-dir:   dir: /platform
inherit-pkg-dir:   dir: /sbin
inherit-pkg-dir:   dir: /usr
inherit-pkg-dir:   dir: /opt/csw
net: address: 10.135.0.24
     physical: em0
zonecfg:sun01> verify
zonecfg:sun01> commit
zonecfg:sun01> exit

Alternatively, we can create and configure a new 'Whole Root' local zone, again with root privileges, reusing the same zone name "sun01".

[sun]# zonecfg -z sun01
sun01: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:sun01> create
zonecfg:sun01> set zonepath=/zones/sun01
zonecfg:sun01> set autoboot=true
zonecfg:sun01> add net
zonecfg:sun01:net> set physical=em0
zonecfg:sun01:net> set address=10.135.0.24
zonecfg:sun01:net> end
zonecfg:sun01> add inherit-pkg-dir
zonecfg:sun01:inherit-pkg-dir> set dir=/opt/csw
zonecfg:sun01:inherit-pkg-dir> end
zonecfg:sun01> remove inherit-pkg-dir dir=/usr
zonecfg:sun01> remove inherit-pkg-dir dir=/sbin
zonecfg:sun01> remove inherit-pkg-dir dir=/lib
zonecfg:sun01> remove inherit-pkg-dir dir=/platform
zonecfg:sun01> info
zonepath: /zones/sun01
autoboot: true
pool:
inherit-pkg-dir:  dir: /opt/csw
net: address: 10.135.0.24
     physical: em0
zonecfg:sun01> verify
zonecfg:sun01> commit
zonecfg:sun01> exit

Brief explanation of the properties that we added:

* zonepath=/zones/sun01
The local zone's root directory, relative to the global zone's root directory; i.e., the local zone will have all of its bin, lib, usr, dev, net, etc, var, opt, etc. directories physically under /zones/sun01.

* autoboot=true

Boot this zone automatically when the global zone is booted.

* physical=em0

The em0 card is used for the physical interface.

* address=10.135.0.24
10.135.0.24 is the zone's IP address. It must have all necessary DNS entries.

The whole add fs section adds the file system to the zone. In this example, the file system that is being exported to the zone is an existing UFS file system.

* set dir=/repo2

/repo2 is the mount point in the local zone.

* set special=/dev/dsk/c1t20d0s1
* set raw=/dev/rdsk/c1t20d0s1

Grant access to the block (/dev/dsk/c1t20d0s1) and raw (/dev/rdsk/c1t20d0s1) devices so the file system can be mounted in the non-global zone. Make sure the block device is not mounted anywhere just before installing the non-global zone; otherwise the zone installation may fail with: ERROR: file system check </usr/lib/fs/ufs/fsck> of </dev/rdsk/c2t40d1s6> failed: exit status <33>: run fsck manually. In that case, unmount the file system that is being exported, uninstall the partially installed zone (zoneadm -z <zone> uninstall), then install the zone from scratch (no need to re-configure the zone, just re-install it).
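The recovery sequence just described might look like this (a sketch; the device and zone names match this example, and -F skips the uninstall confirmation):

[sun]# umount /dev/dsk/c1t20d0s1
[sun]# zoneadm -z sun01 uninstall -F
[sun]# zoneadm -z sun01 install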

* set type=ufs

The file system is of type UFS.

* set options noforcedirectio

Mount the file system with the noforcedirectio option, i.e., use normal buffered I/O rather than forcing direct I/O.

* dir=/opt/csw

A read-only path that will be lofs'd (loopback mounted) from the global zone.

Note: this works for a sparse root zone only -- a whole root zone cannot have any inherited (shared) file systems.

The zonecfg commands verify and commit validate and save the zone configuration, respectively. Note that committing explicitly is not strictly necessary; it is done automatically when you exit the zonecfg tool. The info command displays the current configuration.
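Once committed, the configuration can be dumped from the global zone at any time, which is handy for recreating the zone elsewhere (a sketch):

[sun]# zonecfg -z sun01 export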

5. Now we check the current state of the newly created/configured zone using the zoneadm command.

[sun]# zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /
    - sun01          configured     /zones/sun01

6. The next step is to install the configured zone "sun01". It takes a while to install the necessary packages.
  
[sun]# zoneadm -z sun01 install 

The installer writes a log of the zone installation (its path is printed on screen). Once the installation completes, you will see a confirmation message in the installation window; all the required packages are installed during this process.

7. Now verify the state of the sun01 zone.

[sun]# zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /
    - sun01          installed      /zones/sun01

8. Next, we boot up the sun01 zone.

[sun]# zoneadm -z sun01 boot
zoneadm: zone 'sun01': WARNING: em0:1: no matching subnet found in netmasks(4) for 10.135.0.24; using default of 255.0.0.0.
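This warning appears because 10.135.0.24 has no subnet entry in netmasks(4). You could silence it by adding the subnet to /etc/netmasks in the global zone (a sketch, assuming the /23 mask shown by ifconfig earlier):

[sun]# vi /etc/netmasks
10.135.0.0    255.255.254.0

Either way, the zone boots and shows as running: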

[sun]# zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /
    1 sun01          running        /zones/sun01


9. Log in to the zone console and perform the internal zone configuration. The zlogin utility is used to log in to a zone; its -C option connects you to the zone's console.

[sun]# zlogin -C sun01

9.1. The command above starts an interactive configuration dialog. For the language option select "English"; it is most likely option "0" in the menu.

9.2. After the language option it asks for a locale. Select the "English (C - 7-bit ASCII)" locale for the Solaris 10 zone.

9.3  Enter the host name which identifies this system on the network.  The name must be unique within your domain; creating a duplicate host name will cause problems on the network after you install Solaris. A host name must have at least one character; it can contain letters, digits, and minus signs (-).

10. Now simply log in to the newly created zone, just like connecting to any other system on the network.
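For example, a direct (non-console) login from the global zone, with zonename run inside to confirm where you are (output illustrative):

[sun]# zlogin sun01
# zonename
sun01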

Note: You can create more local zones using this same method. This post showed how to create a new zone on Solaris 10; the installation method differs on other Solaris releases, so the procedure above applies to Solaris 10 only.