Friday, February 10, 2017

How to Create Local YUM Repository on CentOS 7 / RHEL 7

In this post, I describe how to set up a local YUM repository on a CentOS 7 / RHEL 7 server.

A local YUM repository lets you perform any type of package installation without an internet connection. If you often have to install software, security updates, and fixes on multiple systems in your local network, then having a local repository is an efficient way to do it.

All required packages are downloaded over the fast LAN connection from your local server, which saves Internet bandwidth and reduces your annual Internet cost.

Now, please find the step-by-step method to create a YUM repository on RHEL 7 / CentOS 7.

Mount the Local Media:

In this step we mount the CentOS 7 / RHEL 7 installation DVD. For example, let us mount the installation media on the /mnt directory.

#mount -o loop /dev/cdrom /mnt

On my Linux machine I insert the installation ISO in the CD-ROM drive and mount it on the /mnt directory.
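If your installation media is an ISO file on disk rather than a physical disc, you can loop-mount the file directly. The path below is just an example; adjust it to wherever your ISO actually lives.

#mount -o loop /root/CentOS-7-x86_64-DVD.iso /mnt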

Copy or Extract the Media:

In this step, we copy the ISO contents to the local machine. For this we create a directory and copy all the package files from /mnt into it.

#mkdir /centos
#cp -rvf /mnt/* /centos

All the packages are now copied into the "/centos" directory.

Install repository packages:

In this step we will create the repository, but before that we need to install the "createrepo" RPM on the YUM server.

createrepo needs a few dependency RPMs, which you will find in the "Packages" folder of the media.

#rpm -ivh libxml2-python-2.9.1-5.el7_0.1.x86_64.rpm
#rpm -ivh python-deltarpm-3.6-3.el7.x86_64.rpm
#rpm -ivh deltarpm-3.6-3.el7.x86_64.rpm
#rpm -ivh createrepo-0.9.9-23.el7.noarch.rpm

Once these required packages are installed, we create the repo using the createrepo command.

#createrepo -v /centos/Packages
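As a quick sanity check, createrepo should have generated the repository metadata in a repodata subdirectory; you should see repomd.xml and the other metadata files listed there.

#ls /centos/Packages/repodata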

Remove the Online Repository:

Please remove the old repository files from the /etc/yum.repos.d directory. All the default CentOS / RHEL repository files exist in this directory.

#rm -rf /etc/yum.repos.d/*
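If you would rather not delete the stock repo files outright, a safer variant of the same step is to move them aside into a backup directory (the backup path here is just a suggestion), so you can restore them later if needed.

#mkdir /root/repo-backup
#mv /etc/yum.repos.d/*.repo /root/repo-backup/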

Create Local Repository:

In this step we will create the local repository definition file, which points at the local path. Using this local repo we can install all the packages and their dependencies.

#vi /etc/yum.repos.d/local.repo

[Packages]
name=Local CentOS / RHEL Packages
baseurl=file:///centos/Packages
gpgcheck=0
enabled=1

After saving the file, your local.repo repository has been created. In the next step we will enable it.

Enable Local Repository:

After the YUM repository has been created successfully, we clean the YUM cache and list the repositories to make sure it is enabled.

#yum clean all
#yum repolist all

Using the repolist command you can check the newly created and existing repositories on the server. After that you can easily install packages using YUM on RHEL 7 / CentOS 7.
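To confirm the repository works end to end, you can try installing a package from it; vim-enhanced is just an example of a package shipped on the CentOS 7 DVD, so substitute any package you like.

#yum install vim-enhanced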

How to setup NFS Server on CentOS 7 / RHEL 7

In this post, I will explain how to set up an NFS server on CentOS 7 / RHEL 7. This step-by-step installation and configuration method also works on Fedora 22.

Network File System (NFS) is used to share files and folders between Linux / Unix systems. NFS enables you to mount a remote share locally, and it keeps the files updated across the share.

Before starting the setup, we need to understand which services and files are used for NFS.

Please find below the services used for NFS; they must always be running on the operating system.

rpcbind service: The rpcbind server converts RPC program numbers into universal addresses.

nfs-server service: It enables NFS clients to access NFS shares.

nfs-lock / rpc-statd service: These handle lock recovery when an NFS server crashes and reboots.

nfs-idmap service: It translates user and group IDs into names, and user and group names into IDs.

The main configuration file for the NFS server is "/etc/exports". It controls which file systems are exported to remote hosts and specifies the export options.

Now, we will start the step-by-step process to set up an NFS server on CentOS 7 / RHEL 7.

NFS Server Setup:

1. First we need to install the NFS packages on the machine that will act as the NFS server. We can install the required NFS packages using YUM.

#yum install nfs-utils libnfsidmap

This installs all the required packages on the NFS server.

2. Once the packages are installed, we enable and start all the services explained above.

#systemctl enable rpcbind
#systemctl enable nfs-server
#systemctl start rpcbind
#systemctl start nfs-server
#systemctl start rpc-statd
#systemctl start nfs-idmapd

You can check the status of each of these services with "systemctl status service_name" to ensure they are all working fine.

3. Now we will create the directory which we want to share with the clients.

#mkdir /backup
#chmod -R 777 /backup

You can change the permissions of the NFS folder as per your requirement. In my case I give read/write permission to all NFS clients on this shared folder, so they can easily copy and remove files. Ideally, for security reasons, you should never grant 777 permissions.

4. In this step we add an entry for the shared folder and the client information: which client can access the NFS shared folder and with what permissions.

# vi /etc/exports

/backup 10.135.0.2(rw,sync,no_root_squash)

In the entry above, "/backup" is the folder shared by the NFS server and "10.135.0.2" is the client machine that has rights to access it.

Also note the options in brackets, which are very important when configuring an NFS export. Here is a short description of these options.

rw: read/write permission on the shared folder

sync: all changes are flushed to disk before the server replies to the client.

no_root_squash: By default, any file request made by the root user on the client machine is treated as made by the user nobody on the server. If no_root_squash is set, root on the client machine will have the same level of access to the files as root on the server.
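One more note on /etc/exports syntax: if you want to allow an entire subnet rather than a single client, the file also accepts CIDR notation. The subnet below is purely illustrative; substitute your own network range.

/backup 10.135.0.0/24(rw,sync,no_root_squash)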

5. Now we export the shared directories using the following command.

# exportfs -r

Other useful exportfs options are listed below.

exportfs -v : Displays the list of exported file systems and their export options
exportfs -a : Exports all directories listed in /etc/exports
exportfs -u : Unexports one or more directories
exportfs -r : Re-exports all directories after modifying /etc/exports

6. With step 5 above, the NFS server is installed and configured, but if a firewall is running on your machine, you also need to allow the NFS services through it.

#firewall-cmd --permanent --zone=public --add-service=mountd
#firewall-cmd --permanent --zone=public --add-service=rpc-bind
#firewall-cmd --permanent --zone=public --add-service=nfs
#firewall-cmd --reload

NFS Client Setup:

1. With the NFS server installed, we will now mount the remote file system on the NFS client machine. For this, we install the same NFS packages on the client that we installed during the server setup.

#yum install nfs-utils libnfsidmap

This installs all the required packages on the NFS client. Once the packages are installed on the client machine, we enable and start the "rpcbind" service.

#systemctl enable rpcbind
#systemctl start rpcbind

2. Now we will mount the NFS shared folder on the client machine, but before doing that we check from the client which exports the NFS server offers.


client# showmount -e 10.135.0.27    (10.135.0.27 is my NFS server's IP)

Export list for 10.135.0.27:
/backup      10.135.0.2

So on the client machine you can see that our NFS shared folder is available from the 10.135.0.27 NFS server.

3. In this step we mount the NFS shared folder on the client machine. For this we create a mount point on the client where we mount the server's shared folder.

client# mkdir /mnt/backup
client# mount 10.135.0.27:/backup /mnt/backup

You can verify the mount using the "df -h" command.

4. To make the mount permanent, add an entry on the client machine so that after a reboot the shared folder is mounted again.

client# vi /etc/fstab
10.135.0.27:/backup /mnt/backup nfs rw,sync,hard,intr 0 0

Save the entry on the client machine and restart it; after the reboot, once you log in you will see the shared folder is still mounted on the client.
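Before rebooting, a common sanity check is to unmount the share and remount everything from /etc/fstab; if "mount -a" brings the share back without errors, the fstab entry is good.

client# umount /mnt/backup
client# mount -a
client# df -h /mnt/backup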

5. To test the share, create a file on the client machine and then check on the server that the newly created file also appears in the server's folder.

So, using all these steps you can easily set up an NFS server and client on your machines.

Wednesday, February 8, 2017

How To Install VNC Server On CentOS 7 & RHEL 7

In this post you will find the step-by-step installation and configuration of a VNC server on CentOS 7 / RHEL 7. VNC server installation on CentOS 7 / RHEL 7 is quite different from older versions of Linux.

Before moving to the installation part, we first need to know what a VNC server is and how it works in a Linux environment.

VNC (Virtual Network Computing):

VNC stands for Virtual Network Computing. A VNC server allows remote desktop connections in graphical (GUI) mode through a remote client. As the client we can use VNC Viewer, or any other VNC client, to connect to the VNC server. Some packages are required for the installation and configuration, which we explain during the post.

Step by Step Installation and Configuration method for VNC Server:

First we need to install the required packages on the server. You can install the packages using yum, or from source packages if you have them. On my machine I have a yum repo, so we install the RPMs using yum.

1. On my machine I am using the GNOME desktop. If the GNOME desktop is not installed on your machine, install it with the command below.

#yum groupinstall "GNOME Desktop"

The above command installs all the packages required for the desktop environment. When you run it, yum lists all the packages before installation.

2. After installing the GNOME packages, we install the tigervnc-server packages, which are mandatory for the VNC server.

#yum install tigervnc-server*

here, "*" sign is used for all dependency, if you use this sign it is automatically installed all dependency. Using above command we can install all the VNC server rpm's.

3. Now we will add VNC user on the server.

#useradd vibhor

In my case I used my own name, but you can use any name.

4. In CentOS 7 the vncserver configuration file has changed: in older versions of CentOS it was /etc/sysconfig/vncservers, and now it is /lib/systemd/system/vncserver@.service.

#cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:2.service

5. Now we edit the vncserver unit file as described below.

#vi /etc/systemd/system/vncserver@:2.service

[...]
[Service]
Type=forking
# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
#ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver %i"
#PIDFile=/home/<USER>/.vnc/%H%i.pid
ExecStart=/sbin/runuser -l vibhor -c "/usr/bin/vncserver %i"
PIDFile=/home/vibhor/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'

In the above file you need to set the user name you added on the server for VNC access. You can add more users on the server, but for each user you need to create a new service file and change the user name in that file.
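For example, for a hypothetical second user "alice" on display :3 (both the name and the display number here are made up for illustration), the procedure would look like this; inside the copied file you would replace "vibhor" with "alice" in the ExecStart and PIDFile lines.

#useradd alice
#cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:3.service
#vi /etc/systemd/system/vncserver@:3.service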

6. If the firewall is enabled on your machine, you need to permanently add the VNC server service to it. On my machine the firewall is enabled, so I run:

#firewall-cmd --permanent --zone=public --add-service vnc-server
#firewall-cmd --reload

7. Now switch from root to the VNC user and start the VNC server for that user.

#su - vibhor

[vibhor@localhost ~]$ vncserver

You will require a password to access your desktops.

Password:<--yourvncpassword
Verify:<--yourvncpassword
xauth:  file /home/vibhor/.Xauthority does not exist

New 'localhost:2 (vibhor)' desktop is localhost:2

Creating default startup script /home/vibhor/.vnc/xstartup
Starting applications specified in /home/vibhor/.vnc/xstartup
Log file is /home/vibhor/.vnc/localhost:2.log

This creates the startup files in your home directory, and as you can see the service started up and is working fine.

8. Now start all the required services for the VNC server as root, and enable them so that they are not disabled after a reboot.

#systemctl daemon-reload
#systemctl enable vncserver@:2.service
#systemctl start vncserver@:2.service
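Before connecting, you can confirm that the unit came up cleanly:

#systemctl status vncserver@:2.service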

9. Now go to your workstation or laptop, install a VNC client, and connect to the server using the "vibhor" VNC user. When you run the client it asks for a host name; enter the server's host name with the display number (for our configuration, hostname:2) and then the VNC user's password, after which you are connected to the server graphically.

How to increase or decrease file system size in AIX operating system.

Hello Friends,

Hope you are doing well at your end. In my previous post, I explained the step-by-step method to extend and reduce a file system in the HP-UX operating system. In this post, I would like to guide you through increasing or decreasing the file system size in the AIX operating system.

On the AIX platform we normally use the Journaled File System (JFS) or the Enhanced Journaled File System (JFS2). Before moving to the main topic, we need to understand the advantages of the JFS file system in AIX.

The JFS file system has advantages over the BSD and UFS file systems normally used in Unix environments. The biggest disadvantage of BSD/UFS file systems is file system corruption in case of a power failure or system crash, and this corruption mostly occurs during the creation or removal of files. The JFS file system resolves such issues.

With JFS this problem is reduced by the use of a file system log volume. When an AIX system crashes, this log is replayed to bring the system back online, so all data written to disk before the crash remains consistent.

Hopefully the advantages of this file system are now clear. Now we will move to the main topic of this post: how to extend and reduce the file system size. Please follow the step-by-step method described below.

1. Before increasing the size of a file system, we need to check its current size. Run the command below and check the output.
-------------------------------------------------------------------------------------------------------
aix:/> df -kP /test
Filesystem    1024-blocks      Used Available Capacity Mounted on
/dev/test   716308480 694894144  21414336      98% /test
-------------------------------------------------------------------------------------------------------
As you can see in the above output, the file system is almost 98% full, so we will increase the file system size by 200 GB.

2. To increase the file system size, please use the below command.
-------------------------------------------------------------------------------------------------------
aix:/>chfs -a size=+$((200000*2048)) /test
or
aix:/>chfs -a size=+200G  /test
-------------------------------------------------------------------------------------------------------
With the commands above we increase the size of the /test file system by roughly 200 GB using the "chfs" command. You can use either of the two forms: the first gives the size in 512-byte blocks (2048 blocks = 1 MB, so 200000 MB * 2048 blocks), and the second uses the G suffix directly.
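As another worked example of the block arithmetic (the size here is just for illustration), adding 10 GB means 10240 MB, i.e. 10240 * 2048 = 20971520 512-byte blocks, so these two commands are equivalent:
-------------------------------------------------------------------------------------------------------
aix:/>chfs -a size=+$((10240*2048)) /test
aix:/>chfs -a size=+10G /test
-------------------------------------------------------------------------------------------------------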

If you get the error "0516-787 extendlv: Maximum allocation for logical volume <lv_name> is old_limit", the logical volume has hit its maximum LP limit. To resolve this issue, raise the limit with the command below.
-------------------------------------------------------------------------------------------------------
aix:/>chlv -x new_limit lv_name
eg. chlv -x 4000 test
-------------------------------------------------------------------------------------------------------
Note: In my case the logical volume behind the file system is named "test". Replace it with your own LV name.

3. To decrease the file system size in the AIX operating system, use the command below.
-------------------------------------------------------------------------------------------------------
aix:/>chfs -a size=-$((X*2048)) /fs
-------------------------------------------------------------------------------------------------------
You can replace the "X" size in Mo which you want to decrease it. Using above command you can reduce the size of file system in AIX.

Please post your query in the comments if you face any issue while increasing or reducing a file system in the AIX operating system.

Monday, February 6, 2017

LVM and file system basics in HP-UX

Hello Friends,

In my previous post, you saw a simple way to extend or increase the file system size in the HP-UX operating system. Now we also need to understand how these file systems are created with LVM. So in this new post, I will try to explain how to create LVM volumes and file systems in the HP-UX operating system.

In this post, I will give examples for HP-UX 11i v2/v3, since my current machine runs that OS version. You can find LVM creation for Linux in another post on my blog.

As you know, LVM stands for Logical Volume Manager, which is used to manage disks and create file systems on them. Please find the step-by-step method to create LVM volumes in HP-UX described below.

1. First we create a physical volume from free disk space; after that we will create the volume group. To create the physical volume we use the "pvcreate" command.
-------------------------------------------------------------------------------------------------------------
hpx:/>pvcreate -f /dev/rdisk/disk1
Physical volume "/dev/rdisk/disk1" has been successfully created.
-------------------------------------------------------------------------------------------------------------
Using "pvcreate" command new physical volume group'/dev/rdisk/disk1 has been created.

2. In the second step, we create the new volume group on top of the physical volume. But before running the creation command we need to create a VG directory under /dev. Once the directory is created, we set its ownership and permissions accordingly.
-------------------------------------------------------------------------------------------------------------
hpx:/>mkdir -p /dev/vg00    Note: (you can replace volume group name accordingly) 
hpx:/>chown -R root:root /dev/vg00
hpx:/>chmod -R 755 /dev/vg00
-------------------------------------------------------------------------------------------------------------
3. Now we create the group device special file. In HP-UX, each volume group must have a group device special file under its subdirectory in /dev. This group DSF is created with the "mknod" command; like any other DSF, the group file must have a major and a minor number.

One of the most important things while creating LVM on HP-UX is the major and minor number. For LVM 1.0 volume groups the major number must be 64, and for LVM 2.0 it must be 128.

For the minor number, the first two digits uniquely identify the volume group and the remaining digits must be 0000.

In our case we’re creating a 1.0 volume group on HP-UX operating system.

hpx:/>cd /dev/vg00
hpx:/dev/vg00>mknod group c 64 0x010000

Now change the ownership to root:sys and the permissions to 640.

hpx:/>chown root:sys group
hpx:/>chmod 640 group

4. Now create the volume group using below command.

hpx:/>vgcreate -s 16 vg00 /dev/disk/disk1
Volume group "/dev/vg00" has been successfully created.
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf

With the above command we create the new volume group "vg00" with a physical extent size of 16 MB (the -s 16 option), which matches the "PE Size (Mbytes)" field in the vgdisplay output below.

5. Your new volume group on HP-UX has now been created successfully. You can check the volume group and its physical volumes using the command below.
-------------------------------------------------------------------------------------------------------------
hpx:/> vgdisplay -v vg00
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access         read/write     
VG Status                    available                 
Max LV                       255    
Cur LV                        0      
Open LV                     0      
Max PV                      16     
Cur PV                        2      
Act PV                        2      
Max PE per PV           6000         
VGDA                        2   
PE Size (Mbytes)       16              
Total PE                     26    
Alloc PE                    0       
Free PE                      26    
Total PVG                  0        
Total Spare PVs         0              
Total Spare PVs in use      0 

 --- Physical volumes ---
 PV Name                   /dev/disk/disk1
 PV Status                   available                
 Total PE                    13       
 Free PE                     13       
 Autoswitch                On
-------------------------------------------------------------------------------------------------------------
In the above output you can see the VG Name and PV Name we created during the LVM creation. Using this volume group "vg00" we can now create file systems on HP-UX.

6. Now in the final step we create the logical volume in HP-UX using the VG.

hpx:/> lvcreate -n vg00_test -L 256 vg00

Logical volume "/dev/vg00/vg00_test_S2" has been successfully created with
character device "/dev/vg00/rvg00_test_S2".
Logical volume "/dev/vg00/vg00_test_S2" has been successfully extended.
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf

In our example the logical volume name is "vg00_test". In the syntax of the above command, the "-n" option gives the new logical volume name and the "-L" option specifies the size in MB.

hpx:/> lvdisplay  /dev/vg00/vg00_test

Using this command you can check the newly created logical volume on the HP-UX operating system.

7. For file system creation in HP-UX, we use the following steps.

hpx:/>newfs -F vxfs -o largefiles /dev/vg00/vg00_test
version 7 layout 393216 sectors, 393216 blocks of size 1024, log size 1024 blocks large files supported

The above command creates a new VxFS file system. Now we create the mount point and mount the file system on it.
-------------------------------------------------------------------------------------------------------------
hpx:/>mkdir /test
hpx:/>mount /dev/vg00/vg00_test /test
-------------------------------------------------------------------------------------------------------------
Using the above method the /test file system has been created successfully. If you want to check its current size, you can use the "bdf /test" command.

Hope this post is useful for you. If you have any questions related to this post, please leave a comment; I will try to provide a simple, step-by-step solution.

How to increase/extend file system size in HP-UX 11i

Hello Friends,

In this post, I will describe how to extend or increase the file system size on the HP-UX 11i operating system. Before extending a file system we first need to understand which type of file system is currently used in the HP-UX environment.

For example, on my HP-UX system I want to increase the root partition size, so first I need to check which file system type it is. For this we use the "fstyp" command.

-------------------------------------------------------------------------------------------------------------------
hpx:/> fstyp /dev/vg00/lvol3
vxfs
-------------------------------------------------------------------------------------------------------------------

In the above output, my root logical volume /dev/vg00/lvol3 is mounted on "/", and the root file system type is "vxfs".

The extendfs command is used to extend JFS (VxFS) file systems that are not mounted. The Veritas OnlineJFS product extends a mounted file system using the fsadm command. So basically we can extend a file system with two commands:

1. Increase/Extend file system size using "fsadm".
2. Increase/Extend file system size using "extendfs".

Increase/Extend file system size using "fsadm":-

1. Before extending the file system, we need to verify that the OnlineJFS license is installed on the system.

For HP-UX 11i v1 operating system:
-------------------------------------------------------------------------------------------------------------------
hpx:/> vxlicense -t HP_OnlineJFS
vrts:vxlicense: INFO: Feature name: HP_OnlineJFS [50]
vrts:vxlicense: INFO: Number of licenses: 1 (non-floating)
vrts:vxlicense: INFO: Expiration date: No expiration date
vrts:vxlicense: INFO: Release Level: 22
vrts:vxlicense: INFO: Machine Class: All
vrts:vxlicense: INFO: Site ID: 0
-------------------------------------------------------------------------------------------------------------------

For HP-UX 11i  v2|v3 operating system:
-------------------------------------------------------------------------------------------------------------------
hpx:/> vxlicrep | grep Online
HP_OnlineJFS                        = Enabled
-------------------------------------------------------------------------------------------------------------------
In the above outputs you can verify that the OnlineJFS license is installed on the system. Use whichever command matches your HP-UX version.

2. To extend the file system, we first check its current size. In this post, as you know, we will extend the "root" file system, so to check its current size we use the "bdf" command. Please find the "bdf" output below.
-------------------------------------------------------------------------------------------------------------------
hpx:/> bdf
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    1835008  17942 1703507   1% /
-------------------------------------------------------------------------------------------------------------------
To check the current volume group and logical volume size for this file system, run the command below.

hpx:/> vgdisplay -v vg00
LV Name                     /dev/vg00/lvol3
LV Status                     available/syncd
LV Size (Mbytes)        1792
Current LE                  224
Allocated PE               224
Used PV                      1

Now we extend the logical volume using the command below.
-------------------------------------------------------------------------------------------------------------------
hpx:/>lvextend -l 300 /dev/vg00/lvol3
Logical volume "/dev/vg00/lvol3" has been successfully extended.
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
-------------------------------------------------------------------------------------------------------------------
The logical volume is now extended; you can see its new status using the command below.

hpx:/> vgdisplay -v vg00
LV Name                     /dev/vg00/lvol3
LV Status                     available/syncd
LV Size (Mbytes)        2400
Current LE                  300
Allocated PE               300
Used PV                      1

Finally, grow the file system itself to the new logical volume size with fsadm (OnlineJFS allows this while the file system is mounted):

hpx:/>fsadm -F vxfs -b 2400M /
UX:vxfs fsadm: INFO: V-3-25942: /dev/vg00/lvol3 size increased from 1835008 sectors to 2457600 sectors.

3. Your "root" file system size has now been extended. To check the new size, run the command below.
-------------------------------------------------------------------------------------------------------------------
hpx:/> bdf
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    2457600  18095 2287043   1% /
-------------------------------------------------------------------------------------------------------------------
As you can see in the above bdf output, the size of the "root" partition has been extended.


Increase/Extend file system size using "extendfs":-

1. To extend the file system, we again first check its current size with the "bdf" command on the "root" file system. Please find the "bdf" output below.
-------------------------------------------------------------------------------------------------------------------
hpx:/> bdf
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    1835008  17942 1703507   1% /
-------------------------------------------------------------------------------------------------------------------
To check the current volume group and logical volume size for this file system, run the command below.

hpx:/> vgdisplay -v vg00
LV Name                     /dev/vg00/lvol3
LV Status                     available/syncd
LV Size (Mbytes)        1792
Current LE                  224
Allocated PE               224
Used PV                      1

Now we extend the file system using the commands below. The procedure starts the same as the previous one.

2. To extend the file system with "extendfs" we need to unmount the file system, extend it, and mount it again. (Note that the root file system cannot be unmounted while the system is running normally; in practice this is done from single-user/maintenance mode.)

hpx:/>lvextend -l 300 /dev/vg00/lvol3
Logical volume "/dev/vg00/lvol3" has been successfully extended.
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
-------------------------------------------------------------------------------------------------------------------
hpx:/>umount /
hpx:/>extendfs /dev/vg00/lvol3
hpx:/>mount /
-------------------------------------------------------------------------------------------------------------------
Once you mount the "root" file system again then check the newly size of file system, for this we will use same command bdf.
-------------------------------------------------------------------------------------------------------------------
hpx:/> bdf
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    2457600  18095 2287043   1% /
-------------------------------------------------------------------------------------------------------------------

So now we are able to extend or increase a file system in two ways on the HP-UX operating system. Hope you like my post; please let me know if you face any issue while extending a file system on HP-UX.

How to create local zone in Solaris 10

Hello Friends,

In this post, I will explain how to create a new local zone on the Sun Solaris operating system. Before describing the step-by-step installation and configuration method, we need to understand what a zone is and where it is used.

Basically, a zone is a virtual operating system environment created within a single instance of the Solaris operating system. The main goal of this technology is efficient resource utilization; we can create multiple zones on one Solaris operating system.
Solaris 10's zone partitioning technology can be used to create local zones that behave like virtual servers. All local zones are controlled from the system's global zone. Processes running in a zone are completely isolated from the rest of the system.


Note: processes running in a local zone can be monitored from the global zone, but processes running in the global zone, or in another local zone, cannot be monitored from a local zone.

Global Zone: When we install the Solaris 10 operating system, a global zone is installed automatically. The core operating system runs in the global zone, and all local zones run under the same global zone. Using the "zoneadm" command we can list all the configured zones running on the Solaris operating system.


# zoneadm list -v

  ID NAME             STATUS         PATH
   0 global           running        /

Step by step method to create a Local Zone:

When we create a local Solaris zone on the global zone, we have to complete some prerequisites before installing it.

Prerequisites: A fair amount of disk space is required to install a new zone. It needs at least 3 GB to copy the essential files to the local zone; in my case I normally keep 10 GB of free disk space for a local zone. We also require a dedicated IP address for network connectivity.

1. First we check the disk space and network configuration by running the commands below.

[sun]# df -h /
 Filesystem             size   used  avail capacity  Mounted on
 /dev/dsk/c1t1d0s0       50G    22G   28G    46%    /

[sun] # ifconfig -a
 lo0: flags=2001000849 mtu 8232 index 1   
 inet 127.0.0.1 netmask ff000000  
 em0: flags=1000843 mtu 1500 index 2   
 inet 10.135.0.23 netmask fffffe00 broadcast 10.135.0.255

Here, if you see the "df -h" command output we can found that the disk "c1t1d0s0" is mounted on the root file system. Currently the total disk space size of root partition is approx 50 GB, as we required 10 GB free space for installation and configuration of local zone, so free space on root partition is sufficient for zone installation.

In "ifconfig" command output we can able to see the the ip address of global zone.

2. As we have sufficient space on the server, we can go ahead with the local zone installation. First we need to create a directory where we want to install the zone; all the zone files are kept in this folder.

[sun]# mkdir /zones

3. The next step is to define/create the zone root. This is the path to the zone's root directory, relative to the global zone's root directory. The zone root must be owned by root with mode 700. It will be used to set the zonepath property during the zone creation process.

[sun]# cd /zones
[sun]# mkdir sun01
[sun]# chmod 700 sun01
[sun]# ls -l
 total 2
 drwx------   2 root     root         512 Feb 06 12:46 sun01

In a Sparse Root Zone, the directories /usr, /sbin, /lib and /platform are mounted as loopback file systems. That is, although those directories appear as normal directories under the sparse root zone, they are mounted read-only, and any change to them in the global zone is visible from the sparse root zone.


However, if you need the ability to write into any of those directories, you need to configure a Whole Root Zone instead. For example, software like ClearCase needs write permission to the /usr directory; in that case a Whole Root Zone is the way to go. The steps for configuring both variants are shown below.

4. In this step we create and configure a new 'Sparse Root' local zone, with root privileges. For the zone configuration we use the "zonecfg" command, the most widely used command for this purpose.

[sun]# zonecfg -z sun01
sun01: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:sun01> create
zonecfg:sun01> set zonepath=/zones/sun01
zonecfg:sun01> set autoboot=true
zonecfg:sun01> add net
zonecfg:sun01:net> set physical=em0
zonecfg:sun01:net> set address=10.135.0.24
zonecfg:sun01:net> end
zonecfg:sun01> add fs
zonecfg:sun01:fs> set dir=/repo2
zonecfg:sun01:fs> set special=/dev/dsk/c1t20d0s1
zonecfg:sun01:fs> set raw=/dev/rdsk/c1t20d0s1
zonecfg:sun01:fs> set type=ufs
zonecfg:sun01:fs> set options noforcedirectio
zonecfg:sun01:fs> end
zonecfg:sun01> add inherit-pkg-dir
zonecfg:sun01:inherit-pkg-dir> set dir=/opt/csw
zonecfg:sun01:inherit-pkg-dir> end
zonecfg:sun01> info
zonepath: /zones/sun01
autoboot: true
pool:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
inherit-pkg-dir:
        dir: /opt/csw
net:
        address: 10.135.0.24
        physical: em0
zonecfg:sun01> verify
zonecfg:sun01> commit
zonecfg:sun01> exit

4 (alternative). If you want a 'Whole Root' local zone instead, create and configure it as follows, again with root privileges and the same zone name "sun01".

[sun]# zonecfg -z sun01
sun01: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:sun01> create
zonecfg:sun01> set zonepath=/zones/sun01
zonecfg:sun01> set autoboot=true
zonecfg:sun01> add net
zonecfg:sun01:net> set physical=em0
zonecfg:sun01:net> set address=10.135.0.24
zonecfg:sun01:net> end
zonecfg:sun01> add inherit-pkg-dir
zonecfg:sun01:inherit-pkg-dir> set dir=/opt/csw
zonecfg:sun01:inherit-pkg-dir> end
zonecfg:sun01> remove inherit-pkg-dir dir=/usr
zonecfg:sun01> remove inherit-pkg-dir dir=/sbin
zonecfg:sun01> remove inherit-pkg-dir dir=/lib
zonecfg:sun01> remove inherit-pkg-dir dir=/platform
zonecfg:sun01> info
zonepath: /zones/sun01
autoboot: true
pool:
inherit-pkg-dir:
        dir: /opt/csw
net:
        address: 10.135.0.24
        physical: em0
zonecfg:sun01> verify
zonecfg:sun01> commit
zonecfg:sun01> exit

Brief explanation of the properties that we added:

* zonepath=/zones/sun01

The local zone's root directory, relative to the global zone's root directory, i.e. the local zone will have all the bin, lib, usr, dev, etc, var, opt, etc. directories physically under /zones/sun01.

* autoboot=true

Boot this zone automatically when the global zone is booted.

* physical=em0

The em0 card is used as the physical network interface.

* address=10.135.0.24

10.135.0.24 is the zone's IP address. It must have all necessary DNS entries.

The whole "add fs" section adds a file system to the zone. In this example, the file system being exported to the zone is an existing UFS file system.

* set dir=/repo2

/repo2 is the mount point in the local zone.

* set special=/dev/dsk/c1t20d0s1
* set raw=/dev/rdsk/c1t20d0s1

These grant access to the block (/dev/dsk/c1t20d0s1) and raw (/dev/rdsk/c1t20d0s1) devices so the file system can be mounted in the non-global zone. Make sure the block device is not mounted anywhere right before installing the non-global zone; otherwise the zone installation may fail with "ERROR: file system check </usr/lib/fs/ufs/fsck> of </dev/rdsk/c2t40d1s6> failed: exit status <33>: run fsck manually". In that case, unmount the file system being exported, uninstall the partially installed zone (zoneadm -z <zone> uninstall), then install the zone from scratch (no need to re-configure the zone, just re-install it).

* set type=ufs

The file system is of type UFS

* set options noforcedirectio

Mount the file system with the noforcedirectio option.

* dir=/opt/csw

Read-only path that will be lofs'd (loopback mounted) from the global zone.

Note: this works for sparse root zones only; a whole root zone cannot have any shared file systems.

The zonecfg commands verify and commit respectively verify and commit the zone configuration. Note that it is not strictly necessary to commit the configuration; it is done automatically when we exit the zonecfg tool. info displays information about the current configuration.
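If you later want to back up or reproduce this configuration, zonecfg can also dump it as a replayable command file (the output path below is just an example):

[sun]# zonecfg -z sun01 export > /zones/sun01.cfg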

5. Now we check the current state of the newly created/configured zone. For this we use the zoneadm command.

[sun]# zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /
    - sun01          configured     /zones/sun01

6. The next step is to install the configured zone "sun01". It takes a while to install the necessary packages.
  
[sun]# zoneadm -z sun01 install 

The installer writes a log of the zone installation to a file. Once the zone installation is complete you will see a message in the installation window; all the required packages are installed during this step.

7. Now verify the state of the sun01 zone.

[sun]# zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /
    - sun01          installed      /zones/sun01

8. In final step we will boot up the sun01 zone.

[sun]# zoneadm -z sun01 boot
zoneadm: zone 'sun01': WARNING: em0:1: no matching subnet found in netmasks(4) for 
10.135.0.24,using default of  255.0.0.0.

[sun]# zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /
    1 sun01          running        /zones/sun01


9. Log in to the zone console and perform the internal zone configuration. The zlogin utility is used to log in to a zone; the -C option of zlogin logs in to the zone's console.

[sun]# zlogin -C sun01

9.1. The console asks a series of questions when you run the above command. For the language option, select "English"; it is most probably option "0" in the menu.

9.2. After the language option it asks for the locale. Set the "English (C - 7-bit ASCII)" locale for the Solaris 10 zone.

9.3. Enter the host name which identifies this system on the network. The name must be unique within your domain; creating a duplicate host name will cause problems on the network after you install Solaris. A host name must have at least one character; it can contain letters, digits, and minus signs (-).

10. Now simply log in to the newly created zone, just like connecting to any other system on the network.
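For example, from the global zone you can also get a shell inside the zone directly, without the console option:

[sun]# zlogin sun01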

Note: You can create more local zones using this same method. In this post we saw how to create a new zone on Solaris 10. The installation method differs on other Solaris releases, so this post applies to zone creation on Solaris 10 only.