Thursday, March 9, 2017

How to enable SAR (System Activity Reporter) on Solaris Server

In this post, you will find information about one of the most important monitoring tools on the Solaris operating system. SAR (System Activity Reporter) is used to troubleshoot performance issues on Sun Solaris servers.

Using SAR (System Activity Reporter) we can troubleshoot or monitor disk, memory, and CPU performance issues on Solaris servers.

It is a widely used performance monitoring tool, but it also has some disadvantages: the SAR utility consumes a lot of disk space when it generates reports, and the /var file system can fill up rapidly.

In the post below we will go through a step-by-step method to enable SAR on the Solaris operating system.

Step by step procedure to enable SAR (System Activity Reporter):

1. In the first step we check the current status of the SAR service, using either of the commands below.

sun#svcs sar
disabled        Mar_9  svc:/system/sar:default
or
sun#svcs -a | grep -i sar
disabled        Mar_9  svc:/system/sar:default

As you can see, the current status of the SAR service is disabled. You can use either of the above commands to find the current service status.

2.  As seen in the step above, the SAR service is disabled on the Sun Solaris system, so in this step we enable it.

sun#svcadm enable svc:/system/sar:default

Check the status of the service again with the command below.

sun# svcs svc:/system/sar:default
enabled        Mar_9  svc:/system/sar:default

3.  Now we set up automatic data collection. Once the SAR service is enabled, the default scripts for the SAR utility are located as described below.

/usr/lib/sa/sa1: This is a shell script to collect and store data in the binary file /var/adm/sa/sadd, where dd is the current day.

/usr/lib/sa/sa2: This is a shell script that generates the daily report in the file /var/adm/sa/sardd, where dd is the current day.

These scripts are what normally collect data automatically from the Solaris server. If you require a daily or weekly report, you need to add both scripts to the crontab file, as described in the next step.

4.  If you require SAR reports regularly, you need to add entries for the above scripts to the crontab file.

#crontab -e

Using this command you can edit the crontab and add entries for the above scripts according to when you want data collected and reports generated. On Solaris the sa1/sa2 entries conventionally live in the sys user's crontab (as root: crontab -e sys).
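
For reference, entries similar to the stock (commented-out) ones shipped in the Solaris sys crontab make a good starting point; adjust the times to your needs:

0 * * * 0-6 /usr/lib/sa/sa1
20,40 8-17 * * 1-5 /usr/lib/sa/sa1
5 18 * * 1-5 /usr/lib/sa/sa2 -s 8:00 -e 18:01 -i 1200 -A

Once data is being collected, a day's figures can be read back with, for example, sar -u -f /var/adm/sa/sa09.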

Please comment on the post if you have any issue related to this SAR setup.

Wednesday, March 8, 2017

How to run Oracle Explorer on Sun Solaris 11

In this post I will explain how to run Oracle Explorer on the Solaris 11 operating system. Explorer output is used as a snapshot when we need to check hardware issues or internal issues; the explorer files keep all of this information.

Oracle Sun Solaris Explorer is a collection of scripts and binary executables which collect information and create a detailed snapshot of an Oracle Sun Solaris system's configuration.

Oracle Sun Solaris Explorer is always installed in the global zone as the root user, and it runs on both Sun SPARC and Solaris x86 systems. From the Oracle Explorer Data Collector output we gather information on drivers, patches, recent system event history, and log file entries.

Before running Explorer, we need to understand which packages are required to install and configure it.

1. First we need to download the Services Tools Bundle from an FTP server, extract it, and run the install script with its extract option.

# ./install_stb.sh -ext

2. In this step we uncompress and untar the Explorer tar file using the commands below.

# cd /var/tmp/stb/extract/Explorer
# uncompress Explorer.tar.Z
# tar xvf Explorer.tar

3. In this step we install the Explorer packages "SUNWexplo" and "SUNWexplu" from the extracted directory.

# pkgadd -d . SUNWexplo SUNWexplu
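
To confirm that both packages installed cleanly, you can list them with pkginfo:

# pkginfo | grep SUNWexpl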

4. Now we run the explorer command to gather the log files from the Sun Solaris system.

  #explorer
  
Normally on a Solaris server the full path of the above command is /usr/sbin/explorer, which creates and sends the explorer log file.

To create the default configuration file (needed only the first time), use the syntax below.

  # explorer -g 

If you want to check the Explorer version, run the command below.

  # explorer -V
        
Normally on most Solaris servers the default path of the explorer output is /var/explorer/output, but it depends on where you installed Explorer.
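
As a quick check after a run, the newest archive can be located with a simple listing (assuming the default output location):

  # ls -lt /var/explorer/output | head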

Monday, March 6, 2017

How to configure YUM Server in Red Hat Linux 6

In this post, we will learn how to install and configure a YUM repository server on the Red Hat Linux 6 operating system.

As you know, there are several ways in which we can install packages or RPMs on a server. Please find below a step-by-step method to install packages using YUM.

1. In the initial step, insert the installation DVD in the drive (or have the ISO image at hand) and open a terminal to mount it on the /mnt directory.

#mount -o loop /dev/cdrom /mnt

Here we mount the installation media on /mnt. For an ISO image file the -o loop option is required; for the physical disc, a plain mount of /dev/cdrom also works. Make sure nothing else is mounted on /mnt first.

2. Now in the second step we create a directory where we will build the YUM repository.

#mkdir /rhel6

We will copy all the package files into this directory.

3. Now we copy all the files from the mount point to the newly created directory.

#cp -rvf /mnt/* /rhel6

4. In this step, we install the packages required to create a repo on the server.

#cd /rhel6/Packages
#rpm -ivh python*
#rpm -ivh createrepo*

The above packages are required for repository creation on the YUM server.

5. Now we run createrepo against the copied Packages directory to generate the repository metadata.

#createrepo -v /rhel6/Packages

This creates the repodata metadata under /rhel6/Packages, which YUM uses to resolve and install packages.

6. In this step we create a new repo file in the /etc/yum.repos.d directory.

#cd /etc/yum.repos.d/
#rm -rf *
#vi ss.repo

[Packages]
name=RHEL6 Local Packages
baseurl=file:///rhel6/Packages
gpgcheck=0
enabled=1

If you find any existing repo files in this directory, remove them first (as done above) and then create the new repo file.

7. In the final step we clean the YUM cache and list all available RPMs using the commands below.

#yum clean all
#yum list
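
To verify the repository works end to end, try installing a package from it; httpd here is only an illustration, any package present on the media will do:

#yum install httpd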

I hope that with this post you can easily install and configure packages on the Red Hat operating system. Please comment on the post if you encounter any issue.

Friday, March 3, 2017

NFS mount on Solaris 11 Non-Global zones

In this post, we will learn how to mount a folder from one non-global zone in another zone on the Solaris 11 operating system.

As you know, on a Linux server this is less difficult than on a Solaris server. Here I will take two local zones, "sun01" and "sun02". As an example, we will mount the folder "/export/backup" from the local zone "sun01" in the zone "sun02" at the "/project/export/data" location.

Step by Step method of NFS mount on Solaris 11:

1. In the first step we will create the directory on "sun02" zone where we want to mount the folder. 

sun02#mkdir -p /project/export/data

2. In the second step, we configure the share. For this you need to log in to the global zone with root access and make an entry in the dfstab configuration file.

sun#vi /etc/dfs/dfstab

share -F nfs -o rw=sun02 /zones/sun01/root/export/backup

As the entry above shows, we have given the sun02 zone read/write access to the directory being shared from the sun01 local zone.

3. In the next step, log in to the sun02 zone and mount the shared folder using the command below.

sun02#mount sun:/zones/sun01/root/export/backup /project/export/data

4. Once you run the above command, the folder from one local zone is mounted in the other zone temporarily. You can go to the directory and verify that the data listed in the /export/backup folder shows up in the sun02 directory.

5. In the last step, restart the NFS service (or run shareall) on the global zone so that the dfstab change takes effect. Note that the manual mount in step 3 only lasts until the zone reboots; to make it persistent, add an entry to /etc/vfstab on sun02, as sketched below.
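
A minimal sketch of the persistent setup, assuming the global zone hostname is sun as above (the vfstab fields are: resource, fsck device, mount point, FS type, fsck pass, mount at boot, options):

sun# shareall
sun02# vi /etc/vfstab
sun:/zones/sun01/root/export/backup  -  /project/export/data  nfs  -  yes  rw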

Please comment on the post if you have any issue regarding the NFS mount sharing process. I will try to resolve such issues as soon as possible.

Thursday, March 2, 2017

System dump for HP-UX v11.x

In this post, I will cover one of the most interesting and important topics: how to take a system dump on the HP-UX v11.x operating system.

Starting with v11.00, the following things have changed with respect to system dumps.

1. A dump does not necessarily contain all the memory pages.

2. savecore was replaced by two different commands:
  • savecrash for boot-time dump image management.
  • crashutil for post-boot dump analysis.
3. Dump areas can now be configured both in the kernel and in the /etc/fstab file. Also, those dump areas don't need to be in the vg00 volume group.

Now let's understand the role of the dump size.

Dump size:

The default dump size is quite small (not the full memory), but it should be fine for most usage. It can be modified if required using the crashconf utility.

Note: in crashconf, all sizes are in physical pages (4 KB on PA-RISC).
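
For reference, running crashconf with no arguments prints the current crash dump configuration, i.e. the included page classes and the configured dump devices:

hpx:/># crashconf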

Now we move on to the configuration part, described below.

Configuration:

Dump logical volumes must be created contiguous and with bad block relocation disabled. Use the following lvcreate options if creating them manually (or use SAM).

hpx:/># lvcreate -r n -C y ...

Configuring in the kernel:-

lvlnboot can be used to configure dump areas in the vg00 volume group, as long as you have a dump lvol line in the /stand/system file.
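
A hedged sketch, assuming a dedicated logical volume named lvdump (a hypothetical name) created as shown above; -d declares the dump volume and -v displays the resulting boot/root/swap/dump layout:

hpx:/># lvlnboot -d /dev/vg00/lvdump
hpx:/># lvlnboot -v vg00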

Run time configuration:-

Add lines to /etc/fstab with the following format:

device  /  dump  defaults 0 0
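
For example, with the same hypothetical dump LV:

/dev/vg00/lvdump  /  dump  defaults 0 0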

Using swap space as dump area:

Swap space can still be used as a dump area. In that case, both usages should be explicitly declared:

  • use lvlnboot twice (once with the -s option for swap and once with the -d option for dump),
  • or put two lines in /etc/fstab (one of type swap and one of type dump).
When the system disk is mirrored, the swap LV is actually mirrored, whereas the same device used for dumping will only write to one disk.


Dump areas usage order:-

Dump areas are used in reverse order of their declaration, so it is important to declare last the dump areas not used for swap, if you have any (this speeds up reboot, since swap can be activated before the dump image is saved).

Also, if your run-time configuration does not include any dump area in the vg00 volume group, the kernel will automatically use the primary swap space as the first dump device. This might slow down the reboot, since the dump must be completely saved before this primary swap is reactivated. To avoid that:
  • declare non-swap dump partitions in /etc/fstab
  • add a "dump none" line in /stand/system to explicitly use the run-time configuration only
Please comment on the post if you have any query regarding the system dump on the HP-UX operating system.

Sunday, February 26, 2017

zone: error: net0: failed to create VNIC: operation not supported

In this post, I will discuss one of the most interesting errors that I faced when booting a local zone on Solaris 11.3. The description of this interesting issue is given below.

Description of error:

sun# zoneadm -z sun01 boot

zone 'sun01': error: net0: failed to create VNIC: operation not supported

zoneadm: zone sun01: call to zoneadmd(1M) failed

I tried to create and configure a VNIC on the Solaris 11.3 server manually, but it failed with the same error.

sun#dladm create-vnic -l net0 vnic01

dladm: vnic creation failed: operation not supported

If you are also facing such an error while booting a local zone on a Solaris 11 server, then please use the solution below to resolve it.

Solution of error:

1. The error "failed to create VNIC: operation not supported" normally comes up when there are not enough MAC addresses available to assign to the zone, so we need to add alternate MAC addresses to the network interface. First, check the logical domains on the system (see the sketch after the listing below).

sun#ldm list-domain
NAME                             STATE  FLAGS  CONS  VCPU MEMORY  UTIL NORM UPTIME
primary                          active -n-cv- UART  8    8G      2.0% 2.0% 41d 20h 14m
0004fb0000060000ff1d3d8336112f6f active -n---- 5001  50   64G     0.1% 0.1% 18h 23m
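
If the listing had shown no spare MAC address slots, alternate MACs could be added from the control domain with ldm set-vnet. A hedged sketch, assuming the guest domain's virtual network device is named vnet0 (the domain may need to be stopped before its vnet can be modified):

sun#ldm stop-domain 0004fb0000060000ff1d3d8336112f6f
sun#ldm set-vnet alt-mac-addrs=auto,auto,auto,auto vnet0 0004fb0000060000ff1d3d8336112f6f
sun#ldm start-domain 0004fb0000060000ff1d3d8336112f6f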

2. Now log in to the Solaris global zone and check whether net0 has additional MAC addresses or not. Please use the command below to check.

sun# dladm show-phys -m
LINK               SLOT    ADDRESS           INUSE CLIENT
net0               primary 0:21:f6:d6:d3:e5  yes   net0
                   1       0:14:4f:f9:6d:8d  no    --
                   2       0:14:4f:fb:10:2b  no    --
                   3       0:14:4f:f9:41:d6  no    --
                   4       0:14:4f:f8:dd:c8  no    --
net1               primary 0:21:f6:51:be:4d  yes   net1

3. Now the zone will boot without any issue, as alternate MAC addresses are available to assign to it.

sun# zoneadm -z sun01 boot

I hope your issue has been resolved after reading my post. Please let me know if you are still facing any problem with this error.

How to create whole root zone on Solaris 11

In this post we will see how to install a zone on Solaris 11. In my last post on zone creation, you saw the zone creation steps for Solaris 10.

Before going to the main installation part, we need to understand the basic differences between the installation methods of Solaris 10 and Solaris 11.

In Solaris 10, a local zone can be installed without configuring a package repository, while in Solaris 11 we first need to create a Solaris 11 IPS repository before we can install a local zone. In Solaris 11, all local zones use an exclusive IP stack by default. You don't set the IP address while configuring the zone; after the zone is installed, you configure the IP from inside the local zone itself.

On my Solaris 11 machine I have already installed one local zone, which I use for my R&D work, so for this post I will create a second local zone.

Step by step method to create a zone on Solaris 11:-

1. In the first step we create a new local zone. We use the "zonecfg" command and configure the zone as a whole root zone, which is the default when create is run without any options.

#zonecfg -z sun02
sun02: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:sun02> create
zonecfg:sun02> info
zonename: sun02
zonepath:
brand: solaris
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
zonecfg:sun02> set zonepath=/zones/sun02
zonecfg:sun02> add anet
zonecfg:sun02:anet> set linkname=net0
zonecfg:sun02:anet> end
zonecfg:sun02> verify
zonecfg:sun02> commit
zonecfg:sun02> exit

In the above step, we created the zone and set the zonepath. In my case the new Solaris zone path is "/zones/sun02"; in your case you can change the installation zone path. Here I assigned the network interface "net0" to this new zone.

2. After the successful zone creation in step 1, we start the Solaris zone installation in this step. As explained, for a Solaris 11 local zone installation we require a Solaris 11 repository, which is used for the installation.

sun#zoneadm -z sun02 install
The following ZFS file system(s) have been created:
    rpool/zones/sun02
Progress being logged to /var/log/zones/zoneadm.30220110Z233232Z.sun02.install
       Image: Preparing at /zones/sun02/root.
 AI Manifest: /tmp/manifest.xml.F_ayqq
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: sun02
Installation: Starting ...
              Creating IPS image
Startup linked: 1/1 done
        Installing packages from:
solaris  origin:  http://localhost:1008/solaris/ce43f14c4791b5320596e2023cde1ec08709a3af/

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            183/183   33556/33556  222.2/222.2  139k/s

PHASE                                          ITEMS
Installing new actions                   46825/46825
Updating package state database               Done
Updating image state                          Done
Creating fast lookup database                 Done
Installation: Succeeded

Note: Man pages can be obtained by installing pkg:/system/manual
        done.
        Done: Installation completed in 2392.837 seconds.

Now our new zone "sun02" has been installed successfully. All of the zone's files are kept under the /zones/sun02/root directory.

3. In this step we will boot the newly installed zone "sun02".

sun#zoneadm -z sun02 boot

After booting, you can check the status of the zone to confirm that it is running.

sun#zoneadm list -icv
  ID NAME     STATUS   PATH          BRAND    IP
   0 global   running  /             solaris  shared
   1 sun01    running  /zones/sun01  solaris  excl
   3 sun02    running  /zones/sun02  solaris  excl

As you can see in the above output, the new zone "sun02" is running fine on the Solaris 11 operating system.

4. Now in the next step, we log in to the local zone console to complete the configuration process.

sun# zlogin -C sun02
[Connected to zone 'sun02' console]

Press Enter when you see this message. The system configuration tool now asks for some configuration details, which we provide one by one.

Time Zone: Regions
select the region that contains your time zone.
Regions
UTC/GMT
Africa
Americas
Antarctica
Arctic Ocean
Asia
Atlantic Ocean
Australia
Europe
Indian Ocean
Pacific Ocean
F2_Continue  F3_Back  F6_Help  F9_Quit

Time Zone: Locations
Select the location that contains your time zone.
Locations
x Afghanistan
x Armenia
x Azerbaijan
x Bahrain
x Bangladesh
x Bhutan
x Brunei
x Cambodia
x China
x Cyprus
x East Timor
x Georgia
x Hong Kong
v India
F2_Continue  F3_Back  F6_Help  F9_Quit

Time Zone
Select your time zone.
Time Zones
Asia/Kolkata

F2_Continue  F3_Back  F6_Help  F9_Quit

System Configuration Summary
Review the settings below before continuing. Go back (F3) to make changes.

Time Zone: Asia/Kolkata
Language: *The following can be changed when logging in.
Default language: C/POSIX
Terminal type: vt100

Users:
No user account

Network:
Computer name: sun02
Network Configuration: Automatic

Support configuration:
Not generating a Support profile as OCM and ASR services are not installed.
Hostname: sun02

So now your zone is fully configured and installed successfully. You can log in to the zone very easily. In the next step we will see the post-configuration settings required on the local Solaris zone.

5. In the final step, log in to the local zone sun02 and configure the IP address on it, as sketched below.

sun#zlogin sun02
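
Inside the zone, the address is configured with ipadm. A minimal sketch, assuming the anet link appears as net0 in the zone (as configured earlier) and using an example address; adjust it to your network:

root@sun02:~# ipadm create-ip net0
root@sun02:~# ipadm create-addr -T static -a 192.168.1.50/24 net0/v4
root@sun02:~# ipadm show-addr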

You have now successfully logged in to your newly created Solaris 11 zone. Please leave a comment if you have any doubt; I will get back to you as soon as possible.