
Sunday 5 August 2012

top – Process Activity Command

The top command continuously monitors a Linux or Unix system for the processes that consume the most system resources, such as CPU time and memory. top periodically refreshes its display, showing the heaviest resource consumers at the top. It is an excellent first aid when checking a system: if your Linux or Unix machine is responding slowly, run top and look at the load average, CPU utilization, memory and swap usage, and the most CPU- and memory-intensive processes. Chances are you will get a fair idea of what is happening on the system. top is started simply by running the command
top
and the output is a continuously updated full-screen display.

Understanding the top command output:
Line 1: current system time, uptime of the machine, number of users logged in, and the load average over the last 1, 5, and 15 minutes.
Line 2: total number of processes on the machine, and how many are running, sleeping, stopped, and zombie.
Line 3: CPU usage details.
Lines 4 & 5: RAM and swap details.
Line 6: prompt line where interactive top shortcuts take their input (see the list of shortcuts below).
From line 7: the dynamically updated list of top processes.
top command shortcuts:
Note: press these keys while top is running.
l -- show or hide the load average line

t -- show or hide the task/CPU lines

1 -- show or hide the individual CPU lines

m -- show or hide the RAM and swap details

s -- change the refresh interval for top results (value in seconds)

N -- sort by PID number

R -- reverse the current sort order

u -- press u, then a username, to show only that user's processes

P -- sort by CPU utilization

M -- sort by RAM utilization

c -- show or hide the full command path

r -- renice a process: press r, then the PID, then the new nice value

k -- kill a process: press k, then the PID, then Enter

W -- save the modified configuration permanently

q -- quit top
h -- display help on the top command

Working with the top command output:
1. Show Processes Sorted by any Top Output Column – Press O
By default, top displays processes in order of CPU usage. While top is running, press M (upper case) to sort by memory usage instead. To sort the output by any column, press O (upper case O), which displays all the columns you can sort by, as shown below.
Current Sort Field: P for window 1:Def
Select sort field via field letter, type any other key to return
a: PID = Process Id v: nDRT = Dirty Pages count
d: UID = User Id y: WCHAN = Sleeping in Function
e: USER = User Name z: Flags = Task Flags
While top is running, press R to reverse the current sort order.

2. Kill a Task Without Exiting From Top – Press k
Once you've located a process that needs to be killed, press 'k', which will prompt for the process ID and the signal to send. If you have the privilege to kill that particular PID, it will be killed successfully.
PID to kill: 1309
Kill PID 1309 with signal [15]:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1309 geek 23 0 2483m 1.7g 27m S 0 21.8 45:31.32 gagent
1882 geek 25 0 2485m 1.7g 26m S 0 21.7 22:38.97 gagent
5136 root 16 0 38040 14m 9836 S 0 0.2 0:00.39 nautilus

3. Renice a Unix Process Without Exiting From Top – Press r
Press r if you want to just change the priority of a process (and not kill it). This prompts for the PID to renice and then the new nice value.
PID to renice: 1309
Renice PID 1309 to value:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1309 geek 23 0 2483m 1.7g 27m S 0 21.8 45:31.32 gagent
1882 geek 25 0 2485m 1.7g 26m S 0 21.7 22:38.97 gagent

4. Display Selected User in Top Output Using top -u
Use top -u to display only a specific user's processes in the top command output.
$ top -u geek
While top is running, you can instead press u, which will prompt for a username as shown below.
Which user (blank for all): geek
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1309 geek 23 0 2483m 1.7g 27m S 0 21.8 45:31.32 gagent
1882 geek 25 0 2485m 1.7g 26m S 0 21.7 22:38.97 gagent
Display Only Specific Processes with Given PIDs Using top -p
Use top -p as shown below to display specific PIDs.
$ top -p 1309,1882
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1309 geek 23 0 2483m 1.7g 27m S 0 21.8 45:31.32 gagent
1882 geek 25 0 2485m 1.7g 26m S 0 21.7 22:38.97 gagent
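The -u filter also combines with batch mode, which is handy for scripting. A minimal sketch (root is used only as a stand-in username, since it exists on every system):

```shell
# One non-interactive snapshot of top, limited to a single user's processes.
# "root" is just a stand-in; substitute any username on your system.
top -b -n 1 -u root | head -15
```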

5. Display All CPUs / Cores in the Top Output – Press 1 (one)
By default, the top output shows a single CPU line that combines all the CPUs together, as shown below.
top – 20:10:39 up 40 days, 23:02, 1 user, load average: 4.97, 2.01, 1.25
Tasks: 310 total, 1 running, 309 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.5%us, 0.7%sy, 0.0%ni, 92.3%id, 6.4%wa, 0.0%hi, 0.0%si, 0.0%st
Press 1 (one) while top is running, which will break the CPU summary down and show details for each individual CPU on the system, as shown below.
top – 20:10:07 up 40 days, 23:03, 1 user, load average: 5.32, 2.38, 1.39
Tasks: 341 total, 3 running, 337 sleeping, 0 stopped, 1 zombie
Cpu0 : 7.7%us, 1.7%sy, 0.0%ni, 79.5%id, 11.1%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.3%us, 0.0%sy, 0.0%ni, 94.9%id, 4.7%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 3.3%us, 0.7%sy, 0.0%ni, 55.7%id, 40.3%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 5.0%us, 1.0%sy, 0.0%ni, 86.2%id, 7.4%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu4 : 38.5%us, 5.4%sy, 0.3%ni, 0.0%id, 54.8%wa, 0.0%hi, 1.0%si, 0.0%st
Cpu5 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 : 0.3%us, 0.7%sy, 0.0%ni, 97.3%id, 1.7%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 : 5.4%us, 4.4%sy, 0.0%ni, 82.6%id, 7.7%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu8 : 1.7%us, 1.7%sy, 0.0%ni, 72.8%id, 23.8%wa, 0.0%hi, 0.0%si, 0.0%st

6. Refresh Unix Top Command Output On demand (or) Change Refresh Interval
By default, top updates its output every 3.0 seconds. To refresh the output on demand, press the space bar.
To change the output update frequency, press d in interactive mode, and enter the time in seconds as shown below.
Change delay from 3.0 to: 10
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1309 geek 23 0 2483m 1.7g 27m S 0 21.8 45:31.32 gagent
1882 geek 25 0 2485m 1.7g 26m S 0 21.7 22:38.97 gagent

7. Highlight Running Processes in the Linux Top Command Output – Press z or b
Press z or b, which will highlight all running processes as shown below.
Fig: Ubuntu Linux – top command highlighting running processes

8. Display Absolute Path of the Command and its Arguments – Press c
Press c to toggle between the command name and its absolute path with arguments, as shown below.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1309 geek 23 0 2483m 1.7g 27m S 0 21.8 45:31.32 /usr/sbin/gagent
1882 geek 25 0 2485m 1.7g 26m S 0 21.7 22:38.97 /usr/sbin/gagent -l 0 -u pre

9. Quit Top Command After a Specified Number of Iterations Using top -n
By default top runs until you press q. To display only a fixed number of iterations and then exit automatically, use the -n option as shown below.
The following example shows 2 iterations of top output and then exits automatically.
$ top -n 2

10. Executing Unix Top Command in Batch Mode
To execute top in batch mode (plain-text output suitable for scripts and logs), use the -b option as shown below.
$ top -b -n 1
Note: This option is very helpful when you want to capture top output to a readable text file.
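A minimal sketch of capturing one snapshot to a file (the filename is arbitrary):

```shell
# Write a single batch-mode snapshot of top to a text file for later review.
top -b -n 1 > /tmp/top_snapshot.txt

# The first line is the same summary line shown interactively.
head -1 /tmp/top_snapshot.txt
```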

11. Split Top Output into Multiple Panels – Press A
To display multiple views of the top output on the terminal, press A. You can cycle through these windows using 'a'. This is very helpful when you want to sort each window by a different top output column.

12. Get Top Command Help from Command Line and Interactively
Get a quick command line option help using top -h as shown below.
$ top -h
top: procps version 3.2.0
usage: top -hv | -bcisS -d delay -n iterations [-u user | -U user] -p pid [,pid ...]
Press h while top command is running, which will display help for interactive top commands.
Help for Interactive Commands – procps version 3.2.0
Window 1:Def: Cumulative mode Off. System: Delay 3.0 secs; Secure mode Off.
Z,B Global: 'Z' change color mappings; 'B' disable/enable bold
l,t,m Toggle Summaries: 'l' load avg; 't' task/cpu stats; 'm' mem info
1,I Toggle SMP view: '1' single/separate states; 'I' Irix/Solaris mode
……….

13. Decrease Number of Processes Displayed in Top Output – Press n
Press n in interactive mode, which prompts for a number and then shows only that many processes. The following example displays only 2 processes at a time.
Maximum tasks = 0, change to (0 is unlimited): 2
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1309 geek 23 0 2483m 1.7g 27m S 0 21.8 45:31.32 gagent
1882 geek 25 0 2485m 1.7g 26m S 0 21.7 22:38.97 gagent

14. Toggle Top Header to Increase Number of Processes Displayed
By default top displays as many processes as fit in the window height. If you would like to see additional processes, you can hide some of the top header information.
Following is the default header information provided by top.
top – 23:47:32 up 179 days, 3:36, 1 user, load average: 0.01, 0.03, 0.00
Tasks: 67 total, 1 running, 66 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.7% user, 1.2% system, 0.0% nice, 98.0% idle
Mem: 1017136k total, 954652k used, 62484k free, 138280k buffers
Swap: 3068404k total, 22352k used, 3046052k free, 586576k cached
Press l – hide/show the load average (1st header line).
Press t – hide/show the CPU states (2nd and 3rd header lines).
Press m – hide/show the memory information (4th and 5th lines).

15. Save Top Configuration Settings – Press W
If you've made any of the interactive configuration changes suggested in the examples above, you may want to keep them for future top sessions. Once saved, all your configuration options are applied automatically the next time you invoke top.
To save the configuration, press W, which writes it to ~/.toprc and displays a confirmation message as shown below.
top – 23:47:32 up 179 days, 3:36, 1 user, load average: 0.01, 0.03, 0.00
Tasks: 67 total, 1 running, 66 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.7% user, 1.2% system, 0.0% nice, 98.0% idle
Mem: 1017136k total, 954652k used, 62484k free, 138280k buffers
Swap: 3068404k total, 22352k used, 3046052k free, 586576k cached
Wrote configuration to '/home/ramesh/.toprc'

Thursday 2 August 2012

Reset MySQL root password

Steps to reset MySQL root password
[root@nischal ~]# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Check if the MySQL process is running (it's running here):
[root@nischal ~]# ps -ef | grep mysql
mysql 1348 1 0 19:44 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
mysql 1616 1348 0 19:44 ? 00:00:09 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
root 4069 2569 0 22:23 pts/0 00:00:00 grep --color=auto mysql
Time to reset the MySQL root password!
Stop the mysql service, then start it with grant tables disabled:
[root@nischal init.d]# /etc/init.d/mysqld stop
———————————————————————————————————————————–
[root@nischal init.d]# mysqld_safe --skip-grant-tables &
[2] 4706
[root@nischal init.d]# 120802 22:36:21 mysqld_safe Logging to '/var/log/mysqld.log'.
120802 22:36:21 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
————————————————————————————————————————————
[root@nischal init.d]# mysql -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.25a MySQL Community Server (GPL) by Remi
Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
————————————————————————————————————————————-
mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
————————————————————————————————————————————–
mysql> update user set password=PASSWORD("Admin@123_") where User='root';
Query OK, 3 rows affected (0.33 sec)
Rows matched: 3 Changed: 3 Warnings: 0
————————————————————————————————————————————-
[root@nischal init.d]# /etc/init.d/mysqld stop
[root@nischal init.d]# /etc/init.d/mysqld start
—————————————————————————————————————————————
Connect to the MySQL DB using the new password:
[root@nischal ~]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.25a MySQL Community Server (GPL) by Remi
Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
—————————————————————————————————————————————–
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydatabase         |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)
——————————————————————————————————————————————-
Enjoy! Now you have root access. :)

Wednesday 1 August 2012

Finding Load average on Linux

The load average of a Linux server can be found in several ways, shown below. The three values displayed are the system load averages over the past 1, 5, and 15 minutes.

# uptime
07:18:07 up 57 days, 8:04, 8 users, load average: 1.82, 1.24, 0.97

# cat /proc/loadavg
1.67 1.22 0.97 1/283 24487

# w
07:18:19 up 57 days, 8:04, 8 users, load average: 1.56, 1.21, 0.97
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
adevaraj pts/0 gtlslhh.tcprod. 01:41 0.00s 0.10s 0.03s sshd: adevaraju [priv]
tkhalef pts/3 gtwash01.tcprod. Mon05 45:35 0.38s 0.03s sshd: tkhalef [priv]
nimmika pts/4 220.10.245.82 05:49 1:28m 0.02s 0.02s -bash


# top
top – 07:19:14 up 57 days, 8:05, 8 users, load average: 1.57, 1.28, 1.00
Tasks: 284 total, 1 running, 281 sleeping, 0 stopped, 2 zombie
Cpu(s): 2.2%us, 0.7%sy, 0.0%ni, 95.9%id, 1.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16634296k total, 14157704k used, 2476592k free, 438604k buffers
Swap: 5406712k total, 56k used, 5406656k free, 12835252k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21914 apache 15 0 0 0 0 Z 7.7 0.0 0:02.06 httpd
24774 root 15 0 2508 1072 720 R 3.8 0.0 0:00.02 top
20504 apache 15 0 50596 24m 4800 S 1.9 0.1 0:03.89 httpd
1 root 15 0 2072 616 532 S 0.0 0.0 0:08.33 init
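The same three values can also be read programmatically; a small sketch using /proc/loadavg (the variable names are arbitrary):

```shell
# /proc/loadavg holds: 1min 5min 15min running/total last-pid.
# read splits the line on whitespace; "rest" soaks up the trailing fields.
read one five fifteen rest < /proc/loadavg
echo "1min=$one 5min=$five 15min=$fifteen"
```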

Monday 30 July 2012

compgen: An Awesome Command To List All Linux Commands

compgen is a bash built-in command that lists all the commands, aliases, and functions available to you. The syntax is:
compgen option

compgen command examples

To list all the commands available to you, enter:
compgen -c
Sample outputs:
ls
if
then
else
elif
fi
....
mahjongg
sol
gtali
sl-h
gnobots2
gnotravex
iagno
fortune
gnect
gnome-sudoku
LS
glchess
gnuchess
gnuchessx

You can search or count the commands:
compgen -c | grep find
compgen -c | wc -l
echo "$USER user can run $(compgen -c | wc -l) commands on $HOSTNAME."
Sample outputs:
vivek user can run 3436 commands on wks01.

To list all the bash shell aliases available to you, enter:
compgen -a
Sample outputs:
..
...
....
.....
.4
.5
bc
cd..
chgrp
chmod
chown
cp
dnstop
egrep
ethtool
fastping
fgrep
grep
iftop
l.
ll
ln
ls
mcdflush
mcdshow
mcdstats
mount
mv
pscpu
pscpu10
psmem
psmem10
rm
tcpdump
update
updatey
vnstat
wget
which


Other options are as follows:
 
########################################
# Task: show all the bash built-ins
########################################
compgen -b
########################################
# Task: show all the bash keywords
########################################
compgen -k
########################################
# Task: show all the bash functions
########################################
compgen -A function
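These switches combine naturally with grep. A sketch (run under bash, since compgen is a bash built-in) that checks what kind of word a name is:

```shell
#!/usr/bin/env bash
# Use compgen to test whether a name is a shell built-in or a keyword.
# grep -qx matches the whole line, so "cd" will not match e.g. "cdspell".
if compgen -b | grep -qx cd; then
  echo "cd is a shell built-in"
fi
if compgen -k | grep -qx while; then
  echo "while is a shell keyword"
fi
```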

How To Masquerade On Linux (Internet Connection Sharing)

IP Masquerade is a networking function in Linux similar to the one-to-many (1:Many) NAT (Network Address Translation) found in many commercial firewalls and network routers. For example, if a Linux host is connected to the Internet via PPP, Ethernet, etc., the IP Masquerade feature allows other "internal" computers connected to this Linux box (via PPP, Ethernet, etc.) to reach the Internet as well, even though these internal machines don't have officially assigned IP addresses.
MASQ allows a set of machines to access the Internet invisibly via the MASQ gateway: to other machines on the Internet, the outgoing traffic appears to come from the MASQ Linux server itself. Beyond that added functionality, IP Masquerade provides the foundation for a heavily secured networking environment; with a well-built firewall, breaking into a well-configured masquerading system and its internal LAN should be considerably difficult.
Follow these steps to set up masquerading:

Pre-Requirement

Machine 1, which is connected to a WLAN (or a dial-up connection) and also connected to Machine 2 via a LAN cable.
Machine 2, which is connected to Machine 1 via a LAN cable.

Steps for Machine 1 (which is connected to the internet using a wifi connection, i.e. wlan0, or a dial-up connection)

1. Open System -> Administration -> Firewall, and under ‘Masquerading’ select wlan0 (scroll down, if it’s not there add it), then click ‘Apply’
2. Set eth0 to ip 192.168.1.2, either via Network Manager or from the command line with
Code:
ifconfig eth0 192.168.1.2/24
(Assuming your wifi connection is in 192.168.1.* range)
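If you prefer the command line to the Firewall GUI in step 1, the same masquerading rule can be sketched with iptables. This is an assumption-laden sketch, not taken from the original steps: it assumes wlan0 is the Internet-facing interface, and both commands need root.

```
# Enable packet forwarding in the kernel (assumes root privileges).
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade everything leaving through wlan0 behind its address
# (wlan0 is an assumption; use your Internet-facing interface).
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
```

Note that these settings do not survive a reboot unless made persistent through your distribution's firewall configuration.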

Machine 2  (Connected to Machine 1 via ethernet cable from eth0)

1. Stop the NetworkManager service, since it's easier without it:
Code:
service NetworkManager stop
2. Set the eth0 IP address to 192.168.1.3 (same subnet as Machine 1) and the default gateway to 192.168.1.2:
Code:
ifconfig eth0 192.168.1.3
route add default gw 192.168.1.2
Check that you can ping 192.168.1.2.
3. Now just set a DNS address in /etc/resolv.conf
Code:
nameserver 8.8.8.8
or copy the nameserver entries from /etc/resolv.conf on Machine 1.
Now check that you can ping google.com.

Now enjoy the internet sharing. :) 

Friday 13 July 2012

rsync over ssh

rsync is used to perform backups on UNIX / Linux and is a great tool for backing up and restoring files. It is fundamentally a synchronization tool that copies files and directories from one location to another efficiently, transferring only the differences. rsync behaves much like rcp but offers many more options.

Important features of rsync

Speed: the first time, rsync replicates the whole content between the source and destination directories. From then on, it transfers only the changed blocks or bytes to the destination, which makes the transfer really fast.
Security: rsync allows encryption of data using the ssh protocol during transfer.
Less bandwidth: rsync compresses and decompresses data block by block at the sending and receiving ends respectively, so the bandwidth it uses will always be less than that of plain file transfer protocols.
Privileges: no special privileges are required to install and execute rsync.
Some of the additional features of rsync are:
  • support for copying links, devices, owners, groups and permissions
  • exclude and exclude-from options similar to GNU tar
  • a CVS exclude mode for ignoring the same files that CVS would ignore
  • can use any transparent remote shell, including rsh or ssh
  • does not require root privileges
  • pipelining of file transfers to minimize latency costs
  • support for anonymous or authenticated rsync servers (ideal for mirroring)

GENERAL

There are six different ways of using rsync. They are:
  • for copying local files. This is invoked when neither source nor destination path contains a : separator
  • for copying from the local machine to a remote machine using a remote shell program as the transport (such as rsh or ssh). This is invoked when the destination path contains a single : separator.
  • for copying from a remote machine to the local machine using a remote shell program. This is invoked when the source contains a : separator.
  • for copying from a remote rsync server to the local machine. This is invoked when the source path contains a :: separator or a rsync:// URL.
  • for copying from the local machine to a remote rsync server. This is invoked when the destination path contains a :: separator.
  • for listing files on a remote machine. This is done the same way as rsync transfers except that you leave off the local destination.
Note that in all cases (other than listing) at least one of the source and destination paths must be local.

Syntax

$ rsync options source destination
Source and destination can be either local or remote. For a remote location, specify the login name, remote server name, and path.

Examples:

rsync example for backing up/copying from a remote server to a local Linux computer:
rsync -arv user01@server.example.com:/home/user01/ /home/testuser/user01backup/
(/home/testuser/user01backup/ is a local Linux folder path)
Here is what the "-arv" option does:
a = archive – preserves permissions (owners, groups), times, symbolic links, and devices (and implies -r)
r = recursive – copies directories and subdirectories
v = verbose – prints on the screen what is being copied

rsync -rv user01@server.example.com:/home/user01/ /home/testuser/user01backup/
This example will copy folders and sub-folders but will not preserve permissions, times, or symbolic links during the transfer.

rsync -arv --exclude 'logs' user01@server.example.com:/home/user01/ /Users/testuser/user01backup/
This example will copy everything (folders, sub-folders, etc.) and preserve permissions, times, and links, but will exclude the folder /home/user01/logs/ from being copied.

Use of “/” at the end of path:
When using “/” at the end of source, rsync will copy the content of the last folder.
When not using “/” at the end of source, rsync will copy the last folder and the content of the folder.
When using “/” at the end of destination, rsync will paste the data inside the last folder.
When not using “/” at the end of destination, rsync will create a folder with the last destination folder name and paste the data inside that folder.

Saturday 30 June 2012

Apache And Working Status Code


Apache, otherwise known as Apache HTTP Server, is an open-source web server platform that powers a large share of the websites active today. The server runs on many widely used operating systems, such as Unix, Linux, Solaris, Novell NetWare, FreeBSD, Mac OS X, Microsoft Windows, and OS/2.
The Apache server has been the most popular web server on the Internet since April 1996, and it is widely regarded as a reference platform for the development and evaluation of other web servers.
Apache Server Versions
Since its initial launch, the web server has undergone a number of improvements, leading to several major versions, each accompanied by comprehensive documentation archives.
Apache 1.3
Apache 1.3 boasts a great deal of improvements over 1.2, the most noteworthy being more useful configuration files, Windows and Novell NetWare support, DSO support, the APXS tool, and others.
Apache 2.0
Apache 2.0 differs from previous versions in its largely rewritten code, which considerably simplified configuration and boosted efficiency. It supports IPv6 and Unix threading, and ships additional modules such as mod_echo. This version also offers a new compilation system and multi-language error messages.
Apache 2.2
Apache 2.2 came out in 2006 and offers new and more flexible modules for user authentication and proxy caching, support for files exceeding 2 GB, as well as SQL support.
Working Status Codes of Apache
When we request something from Apache, it responds with a status code that depends on the result of the request. These codes have specific meanings, which help in understanding how Apache handled the request. The first digit of the status code identifies one of five classes of response:
1. 1xx (Informational)
2. 2xx (Success)
3. 3xx (Redirection)
4. 4xx (Client Error)
5. 5xx (Server Error)
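You can observe these codes directly with curl. A minimal sketch (the local python3 web server and port 8123 are arbitrary stand-ins so there is something to query; substitute any URL you can reach):

```shell
# Start a throwaway local web server so there is something to query
# (python3's http.server on port 8123 - both arbitrary choices).
python3 -m http.server 8123 >/dev/null 2>&1 &
pid=$!
sleep 1

# -s silences progress, -o /dev/null discards the body,
# -w prints just the numeric status code.
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8123/       # 200 (2xx success)
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8123/nope   # 404 (4xx client error)

kill $pid
```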

1xx Informational

100 Continue
This means that the server has received the request headers, and that the client should proceed to send the request body (in case of a request which needs to be sent; for example, a POST request). If the request body is large, sending it to a server when a request has already been rejected based upon inappropriate headers is inefficient. To have a server check if the request could be accepted based on the requests headers alone, a client must send Expect: 100-continue as a header in its initial request (see RFC 2616 14.20 Expect header) and check if a 100 Continue status code is received in response before continuing (or receive 417 Expectation Failed and not continue).
101 Switching Protocols
This means the requester has asked the server to switch protocols and the server is acknowledging that it will do so.
102 Processing
(WebDAV; RFC 2518)

2xx Success

201 Created
The request has been fulfilled and resulted in a new resource being created.
202 Accepted
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.
203 Non-Authoritative Information
The server successfully processed the request, but is returning information that may be from another source.
204 No Content
The server successfully processed the request, but is not returning any content.
205 Reset Content
The server successfully processed the request, but is not returning any content. Unlike a 204 response, this response requires that the requester reset the document view.
206 Partial Content
The server is delivering only part of the resource due to a range header sent by the client. This is used by tools like wget to enable resuming of interrupted downloads, or split a download into multiple simultaneous streams.
207 Multi-Status
(WebDAV) - The message body that follows is an XML message and can contain a number of separate response codes, depending on how many sub-requests were made.

226 IM Used
The server has fulfilled a GET request for the resource, and the response is a representation of the result of one or more instance-manipulations applied to the current instance. The actual current instance might not be available except by combining this response with other previous or future responses, as appropriate for the specific instance-manipulation(s).

3xx Redirection

300 Multiple Choices
Indicates multiple options for the resource that the client may follow. It could, for instance, be used to present different format options for a video, to list files with different extensions, or for word-sense disambiguation.
301 Moved Permanently
This and all future requests should be directed to the given URI.
302 Found
This is the most popular redirect code, but also an example of industry practice contradicting the standard. The HTTP/1.0 specification (RFC 1945) required the client to perform a temporary redirect (the original describing phrase was "Moved Temporarily"), but popular browsers implemented it as a 303 See Other. Therefore, HTTP/1.1 added status codes 303 and 307 to disambiguate between the two behaviours. However, the majority of Web applications and frameworks still use the 302 status code as if it were the 303.
303 See Other
The response to the request can be found under another URI using a GET method. When received in response to a PUT, it should be assumed that the server has received the data, and the redirect should be issued with a separate GET message.
304 Not Modified
Indicates the resource has not been modified since it was last requested. Typically, the HTTP client provides a header such as If-Modified-Since to supply a time against which to compare. This saves bandwidth and processing on both the server and the client, as only the header data must be sent and received, rather than the server re-processing the entire page and resending it.
305 Use Proxy
The requested resource must be accessed through the proxy given by the Location field. Many HTTP clients (such as Mozilla and Internet Explorer) do not correctly handle responses with this status code, primarily for security reasons.
306 Switch Proxy
No longer used.
307 Temporary Redirect
In this case, the request should be repeated with another URI, but future requests can still use the original URI. In contrast to 303, the request method should not be changed when reissuing the original request; for instance, a POST request must be repeated using another POST request.

4xx Client Error

400 Bad Request
The request contains bad syntax or cannot be fulfilled.
401 Unauthorized
Similar to 403 Forbidden, but specifically for use when authentication is possible but has failed or has not yet been provided. The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource. See Basic access authentication and Digest access authentication.
402 Payment Required
The original intention was that this code might be used as part of some form of digital cash or micropayment scheme, but that has not happened, and this code has never been used.
403 Forbidden
The request was a legal request, but the server is refusing to respond to it. Unlike a 401 Unauthorized response, authenticating will make no difference.
404 Not Found
The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.
405 Method Not Allowed
A request was made of a resource using a request method not supported by that resource; for example, using GET on a form which requires data to be presented via POST, or using PUT on a read-only resource.
406 Not Acceptable
The requested resource is only capable of generating content not acceptable according to the Accept headers sent in the request.
407 Proxy Authentication Required
The client must first authenticate itself with the proxy.
408 Request Timeout
The server timed out waiting for the request.
409 Conflict
Indicates that the request could not be processed because of a conflict in the request, such as an edit conflict.
410 Gone
Indicates that the resource requested is no longer available and will not be available again. This should be used when a resource has been intentionally removed; however, it is not necessary to return this code, and a 404 Not Found can be issued instead. Upon receiving a 410 status code, the client should not request the resource again in the future. Clients such as search engines should remove the resource from their indexes.
411 Length Required
The request did not specify the length of its content, which is required by the requested resource.
412 Precondition Failed
The server does not meet one of the preconditions that the requester put on the request.
413 Request Entity Too Large
The request is larger than the server is willing or able to process.
414 Request-URI Too Long
The URI provided was too long for the server to process.
415 Unsupported Media Type
The request did not specify any media types that the server or resource supports. For example, the client specified that an image resource should be served as image/svg+xml, but the server cannot find a matching version of the image.
416 Requested Range Not Satisfiable
The client has asked for a portion of the file, but the server cannot supply that portion (for example, if the client asked for a part of the file that lies beyond the end of the file).
417 Expectation Failed
The server cannot meet the requirements of the Expect request-header field.
418 I'm a teapot
The HTCPCP server is a teapot. The responding entity MAY be short and stout. Defined by the April Fools' specification RFC 2324. See Hyper Text Coffee Pot Control Protocol for more information.
422 Unprocessable Entity
(WebDAV) (RFC 4918) - The request was well-formed but was unable to be followed due to semantic errors.
423 Locked
(WebDAV) (RFC 4918) - The resource that is being accessed is locked.
424 Failed Dependency
(WebDAV) (RFC 4918) - The request failed due to the failure of a previous request (e.g. a PROPPATCH).
425 Unordered Collection
Defined in drafts of WebDAV Advanced Collections, but not present in "Web Distributed Authoring and Versioning (WebDAV) Ordered Collections Protocol" (RFC 3648).
426 Upgrade Required
(RFC 2817) - The client should switch to TLS/1.0.
449 Retry With
A Microsoft extension. The request should be retried after performing the appropriate action.

5xx Server Error

500 Internal Server Error
A generic error message, given when no more specific message is suitable.
501 Not Implemented
The server either does not recognise the request method, or it lacks the ability to fulfil the request.
502 Bad Gateway
The server was acting as a gateway or proxy and received an invalid response from the upstream server.
503 Service Unavailable
The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.
504 Gateway Timeout
The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.
505 HTTP Version Not Supported
The server does not support the HTTP protocol version used in the request.
506 Variant Also Negotiates
(RFC 2295) - Transparent content negotiation for the request results in a circular reference.
507 Insufficient Storage
(WebDAV) (RFC 4918) - The server is unable to store the representation needed to complete the request.
509 Bandwidth Limit Exceeded
(Apache bw/limited extension) - This status code, while used by many servers, is not specified in any RFC.
510 Not Extended
(RFC 2774) - Further extensions to the request are required for the server to fulfil it.
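Web servers can be configured to return a custom response for any of these codes. In Apache, for example, this is done with the ErrorDocument directive; the file paths below are invented purely for illustration:

```
# In httpd.conf or a .htaccess file (illustrative paths):
ErrorDocument 404 /errors/not_found.html              # serve a local page
ErrorDocument 503 "Service temporarily unavailable"   # inline text message
```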





Wednesday 27 June 2012

SAR - Monitoring Tool

Introduction:


sar is basically used for monitoring the performance of Linux/Unix systems. It generates statistics such as CPU usage, RAM usage and the load average of the server, and stores them in a file at regular intervals. sar collects and displays all system activity statistics. By default, running the command without any option displays the CPU statistics of the current day.
sar is part of the sysstat package; Linux distributions provide sar through that package.

Note:

SAR stores its output to the /var/adm/sa/sadd file, where the dd parameter indicates the current day.


The syntax of the sar command is:



sar [ -flags ] [ -e time ] [ -f filename ] [ -i sec ] [ -s time ]

-f filename   Use filename as the data source for sar. Default is the current daily data file /var/adm/sa/sadd.
-e time       Select data up to time. Default is 18:00.
-i sec        Select data at intervals as close as possible to sec seconds.


flags:

-a   Report use of file access system routines: iget/s, namei/s, dirblk/s
-A   Report all data. Equivalent to -abcdgkmpqruvwy.
-b   Report buffer activity: bread/s, bwrit/s
transfers per second of data between system buffers and disk or other block devices.
lread/s, lwrit/s
accesses of system buffers.
%rcache, %wcache
cache hit ratios, that is, (1-bread/lread) as a percentage.
pread/s, pwrit/s
transfers using raw (physical) device mechanism.
-c   Report system calls: scall/s system calls of all types.
sread/s, swrit/s, fork/s, exec/s
specific system calls.
rchar/s, wchar/s
characters transferred by read and write system calls. No incoming or outgoing exec and fork calls are reported.
-d   Report activity for each block device (for example, disk or tape drive) with the exception of XDC disks and tape drives. When data is displayed, the device specification dsk- is generally used to represent a disk drive. The device specification used to represent a tape drive is machine dependent. The activity data reported is: %busy, avque
portion of time device was busy servicing a transfer request, average number of requests outstanding during that time.
read/s, write/s, blks/s
number of read/write transfers from or to device, number of bytes transferred in 512-byte units.
avwait average wait time in milliseconds.
avserv average service time in milliseconds.
-g   Report paging activities: pgout/s
page-out requests per second.
ppgout/s
pages paged-out per second.
pgfree/s
pages per second placed on the free list by the page stealing daemon.
pgscan/s
pages per second scanned by the page stealing daemon.
%ufs_ipf
the percentage of UFS inodes taken off the freelist by iget which had reusable pages associated with them. These pages are flushed and cannot be reclaimed by processes. Thus, this is the percentage of igets with page flushes.
-k   Report kernel memory allocation (KMA) activities: sml_mem, alloc, fail
information about the memory pool reserving and allocating space for small requests: the amount of memory in bytes KMA has for the small pool, the number of bytes allocated to satisfy requests for small amounts of memory, and the number of requests for small amounts of memory that were not satisfied (failed).
lg_mem, alloc, fail
information for the large memory pool (analogous to the information for the small memory pool).
ovsz_alloc, fail
the amount of memory allocated for oversize requests and the number of oversize requests which could not be satisfied (because oversized memory is allocated dynamically, there is not a pool).
-m   Report message and semaphore activities: msg/s, sema/s
primitives per second.
-p   Report paging activities: atch/s
page faults per second that are satisfied by reclaiming a page currently in memory (attaches per second).
pgin/s
page-in requests per second.
ppgin/s
 pages paged-in per second.
pflt/s
page faults from protection errors per second (illegal access to page) or "copy-on-writes".
vflt/s
address translation page faults per second (valid page not in memory).
slock/s
faults per second caused by software lock requests requiring physical I/O.
-q   Report average queue length while occupied, and percent of time occupied: runq-sz, %runocc
run queue of processes in memory and runnable.
swpq-sz, %swpocc
these are no longer reported by sar .
-r   Report unused memory pages and disk blocks: freemem average pages available to user processes.
freeswap disk blocks available for page swapping.
-u   Report CPU utilization (the default): %usr, %sys, %wio, %idle
portion of time running in user mode, running in system mode, idle with some process waiting for block I/O, and otherwise idle.
-v   Report status of process, i-node, file tables:

proc-sz, inod-sz, file-sz, lock-sz
entries/size for each table, evaluated once at the sampling point.
ov
overflows that occur between sampling points for each table.
-w   Report system swapping and switching activity: swpin/s, swpot/s, bswin/s, bswot/s
number of transfers and number of 512-byte units transferred for swapins and swapouts (including initial loading of some programs).
pswch/s
process switches.
-y   Report TTY device activity:

rawch/s, canch/s, outch/s
input character rate, input character rate processed by canon, output character rate.
rcvin/s, xmtin/s, mdmin/s
receive, transmit and modem interrupt rates.
-o filename   Save samples in file, filename, in binary format.
-e time       Select data up to time. Default is 18:00.
-f filename   Use filename as the data source for sar. Default is the current daily data file /var/adm/sa/sadd.
-i sec        Select data at intervals as close as possible to sec seconds.





Monday 25 June 2012

NRPE

NRPE - Monitoring Tool

Introduction:- 

NRPE stands for Nagios Remote Plugin Executor; it is used for remote system monitoring by executing scripts on remote systems. NRPE is used to monitor "local" resources on remote (Linux/Unix) systems. It allows monitoring of disk usage, system load, the number of users currently logged in, and much more.
                           Nagios can monitor public services such as HTTP and FTP directly, but NRPE works on a client-server basis: you install the NRPE daemon on the remote machine (the machine you want to monitor), then set up your Nagios server to connect to that daemon to collect information about the remote machine.


NOTE:-

           Using NSClient++ instead of NRPE on the remote host, you can execute checks on Windows machines as well.


Figure: NRPE


DESIGN OVERVIEW:

The NRPE addon works in two parts:
  • The check_nrpe plugin, which runs on the local Nagios machine.
  • The NRPE daemon, which runs on the remote Linux/Unix machine.
When Nagios needs to monitor a resource on a remote machine:
  • Nagios executes the check_nrpe plugin and tells it what service needs to be checked.
  • The check_nrpe plugin contacts the NRPE daemon over an SSL-protected connection.
  • The NRPE daemon runs the appropriate Nagios plugin to check the service or resource.
  • The NRPE daemon passes the result of the service check back to the check_nrpe plugin, which in turn passes it to the Nagios process.
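A minimal sketch of how the two pieces connect. The command name, plugin path, and thresholds below are illustrative, not prescriptive; adjust them to your installation:

```
# On the remote host, nrpe.cfg maps a command name to a local plugin:
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20

# On the Nagios server, check_nrpe asks the remote daemon to run that command:
#   check_nrpe -H <remote_host_address> -c check_load
```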


Sunday 24 June 2012

lsof - list of open files




lsof stands for "list open files". It lists information about any file opened by any process, and it is a Linux/Unix diagnostic tool: it is great when you are troubleshooting
an issue and need more information about a process or its connection details. Linux treats almost everything as a file. An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, a stream or a network file (Internet socket, NFS file or UNIX domain socket). A specific file, or all the files in a file system, may be selected by path. When a process or application interacts with these files it has to "open" them; using this command you can dig in and see what your processes are doing.




Show Your Network Connections

Show all connections with -i

lsof -i 
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
dhcpcd 6061 root 4u IPv4 4510 UDP *:bootpc
sshd 7703 root 3u IPv6  6499 TCP *:ssh (LISTEN)
sshd 7892 root 3u IPv6  6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED)


Show only TCP (works the same for UDP)

lsof -iTCP 
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
sshd 7703 root 3u IPv6 6499 TCP *:ssh (LISTEN)
sshd 7892 root 3u IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED)


-i :port shows all networking related to a given port

lsof -i :22 
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
sshd 7703 root 3u  IPv6 6499 TCP *:ssh (LISTEN)
sshd 7892 root 3u  IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED)

To show connections to a specific host, use @host

lsof -i@192.168.1.5 
sshd 7892 root 3u IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED)

Show connections based on the host and the port using @host:port

lsof -i@192.168.1.5:22 
sshd 7892 root 3u IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED)


Grepping for "LISTEN" shows what ports your system is waiting for connections on

lsof -i| grep LISTEN 
iTunes     400 daniel   16u  IPv4 0x4575228  0t0 TCP *:daap (LISTEN)


Grepping for "ESTABLISHED" shows current active connections

lsof -i| grep ESTABLISHED 
firefox-b 169 daniel  49u IPv4 0t0 TCP 1.2.3.3:1863->1.2.3.4:http (ESTABLISHED)

Working with Users, Processes, and Files

You can also get information on various users, processes, and files on your system using lsof:

Show what a given user has open using -u

lsof -u daniel 
-- snipped --
Dock 155 daniel  txt REG   14,2   2798436   823208 /usr/lib/libicucore.A.dylib
Dock 155 daniel  txt REG   14,2   1580212   823126 /usr/lib/libobjc.A.dylib
Dock 155 daniel  txt REG   14,2   2934184   823498 /usr/lib/libstdc++.6.0.4.dylib
Dock 155 daniel  txt REG   14,2    132008   823505 /usr/lib/libgcc_s.1.dylib
Dock 155 daniel  txt REG   14,2    212160   823214 /usr/lib/libauto.dylib
-- snipped --


See what files and network connections a command is using with -c

lsof -c syslog-ng 
COMMAND    PID USER   FD   TYPE     DEVICE    SIZE       NODE NAME
syslog-ng 7547 root  cwd    DIR    3,3    4096   2 /
syslog-ng 7547 root  rtd    DIR    3,3    4096   2 /
syslog-ng 7547 root  txt    REG    3,3  113524  1064970 /usr/sbin/syslog-ng
syslog-ng 7547 root  mem    REG    0,0   0 [heap] 
syslog-ng 7547 root  mem    REG    3,3  105435   850412 /lib/libpthread-2.4.so
syslog-ng 7547 root  mem    REG    3,3 1197180   850396 /lib/libc-2.4.so
syslog-ng 7547 root  mem    REG    3,3   59868   850413 /lib/libresolv-2.4.so
syslog-ng 7547 root  mem    REG    3,3   72784   850404 /lib/libnsl-2.4.so
syslog-ng 7547 root  mem    REG    3,3   32040   850414 /lib/librt-2.4.so
syslog-ng 7547 root  mem    REG    3,3  126163   850385 /lib/ld-2.4.so
-- snipped --


Pointing to a file shows what's interacting with that file

lsof /var/log/messages 
COMMAND    PID USER   FD   TYPE DEVICE   SIZE   NODE NAME
syslog-ng 7547 root    4w   REG    3,3 217309 834024 /var/log/messages


The -p switch lets you see what a given process ID has open, which is good for learning more about unknown processes

lsof -p 10075 
-- snipped --
sshd    10068 root  mem    REG    3,3   34808 850407 /lib/libnss_files-2.4.so
sshd    10068 root  mem    REG    3,3   34924 850409 /lib/libnss_nis-2.4.so
sshd    10068 root  mem    REG    3,3   26596 850405 /lib/libnss_compat-2.4.so
sshd    10068 root  mem    REG    3,3  200152 509940 /usr/lib/libssl.so.0.9.7
sshd    10068 root  mem    REG    3,3   46216 510014 /usr/lib/liblber-2.3
sshd    10068 root  mem    REG    3,3   59868 850413 /lib/libresolv-2.4.so
sshd    10068 root  mem    REG    3,3 1197180 850396 /lib/libc-2.4.so
sshd    10068 root  mem    REG    3,3   22168 850398 /lib/libcrypt-2.4.so
sshd    10068 root  mem    REG    3,3   72784 850404 /lib/libnsl-2.4.so
sshd    10068 root  mem    REG    3,3   70632 850417 /lib/libz.so.1.2.3
sshd    10068 root  mem    REG    3,3    9992 850416 /lib/libutil-2.4.so
-- snipped --


The -t option returns just a PID

lsof -t -c Mail 
350
ps aux | grep Mail
daniel 350 0.0 1.5 405980 31452 ?? S  Mon07PM 2:50.28 /Applications/Mail.app

Advanced Usage



Using -a allows you to combine search terms, so the query below says, "show me everything running as daniel connected to 1.1.1.1"

lsof -a -u daniel -i @1.1.1.1 
bkdr   1893 daniel 3u  IPv6 3456 TCP 10.10.1.10:1234->1.1.1.1:31337 (ESTABLISHED)


Using the -t and -c options together you can HUP processes

kill -HUP `lsof -t -c sshd`


You can also use the -t with -u to kill everything a user has open

kill -9 `lsof -t -u daniel`


lsof +L1 shows you all open files that have a link count less than 1, often indicative of a cracker trying to hide something

lsof +L1 
(hopefully nothing)







Thursday 21 June 2012

DOS and D-DOS Attack

The Difference Between a DoS and a DDoS


DoS and DDoS attacks sound almost the same, but there is a huge difference between them.

DoS Attack:- DoS stands for Denial of Service. In this type of attack, a single machine is directed to flood a target with packets (TCP/UDP); the target may be a specific server, a specific port or service on a targeted system, a network or network component, a firewall, or any other system component. The objective of the attack is to overload the server's bandwidth and other resources. This leaves the server inaccessible to others, which effectively blocks the website.


DDoS Attack:- A DDoS attack is a Distributed Denial of Service attack. It is much like a DoS attack, but the results are far more dangerous. A DDoS attack comes from more than one source, more than one location, or more than one internet connection at the same time. It is an attack where multiple compromised systems (usually infected with viruses, malware, or trojans) are used to target a single server. The computers behind such an attack are distributed around the world and are part of a botnet.

The main objective of both attacks is to overload the server with hundreds or thousands of hits or requests. This causes over-usage of server resources (such as memory, CPU, etc.), and it is hard for a server to withstand a DDoS attack.

Wednesday 20 June 2012

AWK tutorial

Awk Introduction and Printing Operations


awk is a powerful Unix command. It allows the user to manipulate files that are structured as columns of data and strings. Once you understand the basics of awk, you will find that it is surprisingly useful. awk stands for the names of its authors: Aho, Weinberger, and Kernighan.
Awk is mostly used for pattern scanning and processing. It searches one or more files to see if they contain lines that match the specified patterns and then performs the associated actions.
Some of the key features of Awk are:
  • Awk views a text file as records and fields.
  • Like common programming languages, Awk has variables, conditionals and loops
  • Awk has arithmetic and string operators.
  • Awk can generate formatted reports
Awk reads from a file or from its standard input, and outputs to its standard output. Awk does not get along with non-text files.


The basic syntax of AWK:
awk 'BEGIN {start_action} {action} END {stop_action}' filename


Here the actions in the BEGIN block are performed before processing the file, and the actions in the END block are performed after processing the file. The rest of the actions are performed while processing the file.
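A minimal runnable sketch of the three blocks; the input text here is invented purely for illustration:

```shell
# BEGIN runs once before any input is read, the middle action runs for
# every input line, and END runs once after the last line.
printf 'alpha\nbeta\ngamma\n' | awk '
BEGIN { print "start"; n = 0 }
      { n++ }
END   { print "lines:", n }
'
# -> start
# -> lines: 3
```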

Syntax:

awk '/search pattern1/ {Actions}
     /search pattern2/ {Actions}' file
In the above awk syntax:
  • search pattern is a regular expression.
  • Actions – statement(s) to be performed.
  • Several patterns and actions are possible in Awk.
  • file – Input file.
  • The single quotes around the program prevent the shell from interpreting any of its special characters.

Awk Working Methodology

  1. Awk reads the input files one line at a time.
  2. For each line, it matches against the given patterns in the given order; if one matches, it performs the corresponding action.
  3. If no pattern matches, no action is performed.
  4. In the above syntax, either the search pattern or the action is optional, but not both.
  5. If the search pattern is not given, Awk performs the given actions for each line of the input.
  6. If the action is not given, Awk prints all lines that match the given patterns; printing is the default action.
  7. Empty braces without any action do nothing; they won't even perform the default printing operation.
  8. Each statement in Actions should be delimited by a semicolon.
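The optional-pattern and optional-action rules above can be seen directly; the sample lines are invented for illustration:

```shell
# Pattern with no action: matching lines are printed (the default action).
printf 'cat\ndog\ncow\n' | awk '/c/'
# -> cat
# -> cow

# Action with no pattern: the action runs for every input line.
printf 'cat\ndog\ncow\n' | awk '{ print NR, $0 }'
# -> 1 cat
# -> 2 dog
# -> 3 cow
```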

Awk Examples

Create a file example with the following data. This file can be easily created using the output of ls -l.
-rw-r--r-- 1 owner owner 12 Jun  8 21:39 p1
-rw-r--r-- 1 owner owner 17 Jun  8 21:15 t1
-rw-r--r-- 1 owner owner 26 Jun  8 21:38 t2
-rw-r--r-- 1 owner owner 25 Jun  8 21:38 t3
-rw-r--r-- 1 owner owner 43 Jun  8 21:39 t4
-rw-r--r-- 1 owner owner 48 Jun  8 21:39 t5

This file includes rows and columns. Each column is separated by a single space and each row is separated by a newline character. We will use this file as the input for the examples discussed here.

1. awk '{print $1}' example

Here $1 has a meaning. $1, $2, $3... represents the first, second, third columns... in a row respectively. This awk command will print the first column in each row as shown below.
-rw-r--r--
-rw-r--r--
-rw-r--r--
-rw-r--r--
-rw-r--r--
-rw-r--r--

To print the 4th and 5th columns in a file, use awk '{print $4,$5}' example

Here the BEGIN and END blocks are not used, so the print command is executed for each row read from the file.


2. awk '{ if($9 == "t4") print $0;}' example


This awk command checks for the string "t4" in the 9th column, and if it finds a match it prints the entire line. The output of this awk command is


-rw-r--r-- 1 owner owner 43 Jun 8 21:39 t4
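As one further illustration (an addition in the same vein, reusing the sample file from above): the BEGIN/END blocks introduced earlier can accumulate a running total, here summing the size column ($5).

```shell
# Recreate the sample file from the tutorial, then sum its size column ($5).
printf '%s\n' \
  '-rw-r--r-- 1 owner owner 12 Jun  8 21:39 p1' \
  '-rw-r--r-- 1 owner owner 17 Jun  8 21:15 t1' \
  '-rw-r--r-- 1 owner owner 26 Jun  8 21:38 t2' \
  '-rw-r--r-- 1 owner owner 25 Jun  8 21:38 t3' \
  '-rw-r--r-- 1 owner owner 43 Jun  8 21:39 t4' \
  '-rw-r--r-- 1 owner owner 48 Jun  8 21:39 t5' > example

awk 'BEGIN { total = 0 } { total += $5 } END { print "total:", total }' example
# -> total: 171
```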