
Saturday 30 June 2012

Apache And Working Status Code


Apache, otherwise known as the Apache HTTP Server, is an open-source web server platform that powers the majority of the websites active today. The server runs on a wide range of operating systems, including Unix, Linux, Solaris, Novell NetWare, FreeBSD, Mac OS X, Microsoft Windows, OS/2, etc.
The Apache server has been the most popular web server on the Internet since April 1996. It is widely considered a reference platform for the development and evaluation of other web servers.
Apache Server Versions
Since its initial launch the web server has undergone a number of improvements, which led to the release of several versions. All of the versions are accompanied by comprehensive documentation archives.
Apache 1.3
Apache 1.3 boasts a great deal of improvements over 1.2, the most noteworthy of them being more flexible configuration files, Windows and Novell NetWare support, DSO support, the APXS tool and others.
Apache 2.0
Apache 2.0 differs from previous versions in that much of the code was rewritten, which considerably simplified its configuration and boosted its efficiency. It supports IPv6, Unix threading, and additional protocols through modules such as mod_echo. This version also offers a new compilation system and multi-language error messaging.
Apache 2.2
Apache 2.2 came out in 2006 and offers new and more flexible modules for user authentication and proxy caching, support for files exceeding 2 GB, as well as SQL support.
Working Status Codes of Apache
When we request something from Apache, it responds with a status code that depends on the result of the request. These codes have specific meanings that help in understanding how Apache is working. The first digit of the status code specifies one of five classes of response.
Apache status codes are divided into five classes depending upon their category.
1. 1xx (Informational)
2. 2xx (Success)
3. 3xx (Redirection)
4. 4xx (Client Error)
5. 5xx (Server Error)
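Within Apache, custom responses for these codes can be configured with the ErrorDocument directive (the directive named in each entry below). A minimal sketch, assuming a context (httpd.conf or .htaccess) where the directive is allowed and hypothetical page paths:

ErrorDocument 404 /errors/not_found.html
ErrorDocument 403 /errors/forbidden.html
ErrorDocument 500 "Sorry, the server hit an internal error."

# A quick way to see which status code a URL returns (the URL is a placeholder):
curl -I http://localhost/some-missing-page
# HTTP/1.1 404 Not Found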

1xx Informational

100 Continue
ErrorDocument Continue | Sample 100 Continue
This means that the server has received the request headers, and that the client should proceed to send the request body (in the case of a request for which a body needs to be sent; for example, a POST request). If the request body is large, sending it to a server when a request has already been rejected based upon inappropriate headers is inefficient. To have a server check if the request could be accepted based on the request's headers alone, a client must send Expect: 100-continue as a header in its initial request (see RFC 2616 14.20 Expect header) and check if a 100 Continue status code is received in response before continuing (or receive 417 Expectation Failed and not continue).
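As an illustration, curl can be told to send the Expect header explicitly; a small sketch (the upload URL and file name are placeholders):

curl -v -H "Expect: 100-continue" --data-binary @bigfile.bin http://example.com/upload
# With -v, the request headers are shown first; the server should answer
# "< HTTP/1.1 100 Continue" before the body is transmitted.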
101 Switching Protocols
ErrorDocument Switching Protocols | Sample 101 Switching Protocols
This means the requester has asked the server to switch protocols and the server is acknowledging that it will do so.
102 Processing
ErrorDocument Processing | Sample 102 Processing
(WebDAV) - (RFC 2518 )

2xx Success

201 Created
ErrorDocument Created | Sample 201 Created
The request has been fulfilled and resulted in a new resource being created.
202 Accepted
ErrorDocument Accepted | Sample 202 Accepted
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.
203 Non-Authoritative Information
ErrorDocument Non-Authoritative Information | Sample 203 Non-Authoritative Information
The server successfully processed the request, but is returning information that may be from another source.
204 No Content
ErrorDocument No Content | Sample 204 No Content
The server successfully processed the request, but is not returning any content.
205 Reset Content
ErrorDocument Reset Content | Sample 205 Reset Content
The server successfully processed the request, but is not returning any content. Unlike a 204 response, this response requires that the requester reset the document view.
206 Partial Content
ErrorDocument Partial Content | Sample 206 Partial Content
The server is delivering only part of the resource due to a Range header sent by the client. This is used by tools like wget to enable resuming of interrupted downloads, or to split a download into multiple simultaneous streams.
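For instance, a ranged request can be issued explicitly; a small sketch with placeholder URLs:

curl -r 0-1023 -o first_kb.bin http://example.com/big.iso    # ask only for bytes 0-1023
wget -c http://example.com/big.iso                           # resume an interrupted download via a Range request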
207 Multi-Status
ErrorDocument Multi-Status | Sample 207 Multi-Status
(WebDAV) - The message body that follows is an XML message and can contain a number of separate response codes, depending on how many sub-requests were made.

226 IM Used
ErrorDocument IM Used | Sample 226 IM Used
The server has fulfilled a GET request for the resource, and the response is a representation of the result of one or more instance-manipulations applied to the current instance. The actual current instance might not be available except by combining this response with other previous or future responses, as appropriate for the specific instance-manipulation(s).

3xx Redirection

300 Multiple Choices
ErrorDocument Multiple Choices | Sample 300 Multiple Choices
Indicates multiple options for the resource that the client may follow. It, for instance, could be used to present different format options for video, list files with different extensions, or word sense disambiguation.
301 Moved Permanently
ErrorDocument Moved Permanently | Sample 301 Moved Permanently
This and all future requests should be directed to the given URI.
302 Found
ErrorDocument Found | Sample 302 Found
This is the most popular redirect code, but also an example of industry practice contradicting the standard. The HTTP/1.0 specification (RFC 1945) required the client to perform a temporary redirect (the original describing phrase was "Moved Temporarily"), but popular browsers implemented it as a 303 See Other. Therefore, HTTP/1.1 added status codes 303 and 307 to disambiguate between the two behaviours. However, the majority of Web applications and frameworks still use the 302 status code as if it were the 303.
303 See Other
ErrorDocument See Other | Sample 303 See Other
The response to the request can be found under another URI using a GET method. When received in response to a PUT, it should be assumed that the server has received the data and the redirect should be issued with a separate GET message.
304 Not Modified
ErrorDocument Not Modified | Sample 304 Not Modified
Indicates the resource has not been modified since it was last requested. Typically, the HTTP client provides a header such as If-Modified-Since to supply a time against which to compare. Using this saves bandwidth and reprocessing on both the server and the client, as only the headers need to be sent and received, rather than the server re-processing the entire page and resending it.
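A conditional request can be simulated with curl; a sketch with a placeholder URL and date:

curl -I -H "If-Modified-Since: Sat, 30 Jun 2012 00:00:00 GMT" http://example.com/index.html
# If the page has not changed since that date, the server replies
# "HTTP/1.1 304 Not Modified" with headers only and no body.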
305 Use Proxy
ErrorDocument Use Proxy | Sample 305 Use Proxy
The requested resource is available only through a proxy, whose address is provided in the response. Many HTTP clients (such as Mozilla and Internet Explorer) do not correctly handle responses with this status code, primarily for security reasons.
306 Switch Proxy
ErrorDocument Switch Proxy | Sample 306 Switch Proxy
No longer used.
307 Temporary Redirect
ErrorDocument Temporary Redirect | Sample 307 Temporary Redirect
In this case, the request should be repeated with another URI, but future requests can still use the original URI. In contrast to 303, the request method should not be changed when reissuing the original request. For instance, a POST request must be repeated using another POST request.

4xx Client Error

400 Bad Request
ErrorDocument Bad Request | Sample 400 Bad Request
The request contains bad syntax or cannot be fulfilled.
401 Unauthorized
ErrorDocument Unauthorized | Sample 401 Unauthorized
Similar to 403 Forbidden, but specifically for use when authentication is possible but has failed or not yet been provided. The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource. See Basic access authentication and Digest access authentication.
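In Apache, a 401 challenge is typically produced by protecting a location with Basic authentication; a minimal sketch (directory path, realm name and password file are example values):

<Directory "/var/www/private">
    AuthType Basic
    AuthName "Restricted Area"
    AuthUserFile /etc/httpd/.htpasswd
    Require valid-user
</Directory>
# An unauthenticated request now receives "401 Unauthorized" together with a
# WWW-Authenticate: Basic realm="Restricted Area" header.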
402 Payment Required
ErrorDocument Payment Required | Sample 402 Payment Required
The original intention was that this code might be used as part of some form of digital cash or micropayment scheme, but that has not happened, and this code has never been used.
403 Forbidden
ErrorDocument Forbidden | Sample 403 Forbidden
The request was a legal request, but the server is refusing to respond to it. Unlike a 401 Unauthorized response, authenticating will make no difference.
404 Not Found
ErrorDocument Not Found | Sample 404 Not Found
The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.
405 Method Not Allowed
ErrorDocument Method Not Allowed | Sample 405 Method Not Allowed
A request was made of a resource using a request method not supported by that resource; for example, using GET on a form which requires data to be presented via POST, or using PUT on a read-only resource.
406 Not Acceptable
ErrorDocument Not Acceptable | Sample 406 Not Acceptable
The requested resource is only capable of generating content not acceptable according to the Accept headers sent in the request.
407 Proxy Authentication Required
ErrorDocument Proxy Authentication Required | Sample 407 Proxy Authentication Required
The client must first authenticate itself with the proxy.
408 Request Timeout
ErrorDocument Request Timeout | Sample 408 Request Timeout
The server timed out waiting for the request.
409 Conflict
ErrorDocument Conflict | Sample 409 Conflict
Indicates that the request could not be processed because of conflict in the request, such as an edit conflict.
410 Gone
ErrorDocument Gone | Sample 410 Gone
Indicates that the resource requested is no longer available and will not be available again. This should be used when a resource has been intentionally removed; however, it is not necessary to return this code and a 404 Not Found can be issued instead. Upon receiving a 410 status code, the client should not request the resource again in the future. Clients such as search engines should remove the resource from their indexes.
411 Length Required
ErrorDocument Length Required | Sample 411 Length Required
The request did not specify the length of its content, which is required by the requested resource.
412 Precondition Failed
ErrorDocument Precondition Failed | Sample 412 Precondition Failed
The server does not meet one of the preconditions that the requester put on the request.
413 Request Entity Too Large
ErrorDocument Request Entity Too Large | Sample 413 Request Entity Too Large
The request is larger than the server is willing or able to process.
414 Request-URI Too Long
ErrorDocument Request-URI Too Long | Sample 414 Request-URI Too Long
The URI provided was too long for the server to process.
415 Unsupported Media Type
ErrorDocument Unsupported Media Type | Sample 415 Unsupported Media Type
The request entity has a media type that the server or resource does not support. For example, the client uploaded an image as image/svg+xml, but the server requires that images use a different format.
416 Requested Range Not Satisfiable
ErrorDocument Requested Range Not Satisfiable | Sample 416 Requested Range Not Satisfiable
The client has asked for a portion of the file, but the server cannot supply that portion (for example, if the client asked for a part of the file that lies beyond the end of the file).
417 Expectation Failed
ErrorDocument Expectation Failed | Sample 417 Expectation Failed
The server cannot meet the requirements of the Expect request-header field.
418 I'm a teapot
ErrorDocument I'm a teapot | Sample 418 I'm a teapot
The HTCPCP server is a teapot. The responding entity MAY be short and stout. Defined by the April Fools specification RFC 2324. See Hyper Text Coffee Pot Control Protocol for more information.
422 Unprocessable Entity
ErrorDocument Unprocessable Entity | Sample 422 Unprocessable Entity
(WebDAV) (RFC 4918 ) - The request was well-formed but was unable to be followed due to semantic errors.
423 Locked
ErrorDocument Locked | Sample 423 Locked
(WebDAV) (RFC 4918 ) - The resource that is being accessed is locked.
424 Failed Dependency
ErrorDocument Failed Dependency | Sample 424 Failed Dependency
(WebDAV) (RFC 4918 ) - The request failed due to failure of a previous request (e.g. a PROPPATCH).
425 Unordered Collection
ErrorDocument Unordered Collection | Sample 425 Unordered Collection
Defined in drafts of WebDAV Advanced Collections, but not present in "Web Distributed Authoring and Versioning (WebDAV) Ordered Collections Protocol" (RFC 3648).
426 Upgrade Required
ErrorDocument Upgrade Required | Sample 426 Upgrade Required
(RFC 2817 ) - The client should switch to TLS/1.0.
449 Retry With
ErrorDocument Retry With | Sample 449 Retry With
A Microsoft extension. The request should be retried after doing the appropriate action.

5xx Server Error

501 Not Implemented
ErrorDocument Not Implemented | Sample 501 Not Implemented
The server either does not recognise the request method, or it lacks the ability to fulfil the request.
502 Bad Gateway
ErrorDocument Bad Gateway | Sample 502 Bad Gateway
The server was acting as a gateway or proxy and received an invalid response from the upstream server.
503 Service Unavailable
ErrorDocument Service Unavailable | Sample 503 Service Unavailable
The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.
504 Gateway Timeout
ErrorDocument Gateway Timeout | Sample 504 Gateway Timeout
The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.
505 HTTP Version Not Supported
ErrorDocument HTTP Version Not Supported | Sample 505 HTTP Version Not Supported
The server does not support the HTTP protocol version used in the request.
506 Variant Also Negotiates
ErrorDocument Variant Also Negotiates | Sample 506 Variant Also Negotiates
(RFC 2295 ) - Transparent content negotiation for the request results in a circular reference.
507 Insufficient Storage
ErrorDocument Insufficient Storage | Sample 507 Insufficient Storage
(WebDAV) (RFC 4918 ) - The server is unable to store the representation needed to complete the request.
509 Bandwidth Limit Exceeded
ErrorDocument Bandwidth Limit Exceeded | Sample 509 Bandwidth Limit Exceeded
(Apache bw/limited extension) - This status code, while used by many servers, is not specified in any RFCs.
510 Not Extended
ErrorDocument Not Extended | Sample 510 Not Extended
(RFC 2774 ) - Further extensions to the request are required for the server to fulfil it.





Wednesday 27 June 2012

SAR - Monitoring Tool

Introduction:


sar is basically used for monitoring the performance of Linux/Unix based systems. sar generates statistics for CPU usage, RAM usage and the load average of the server and stores them in a file at regular intervals. sar collects and displays all system activity statistics. By default, the command without any option displays the CPU stats of the current day.
sar is part of the sysstat package; Linux distributions provide sar through the sysstat package.

Note:

SAR stores its output to the /var/adm/sa/sadd file, where the dd parameter indicates the current day.


Syntax of the sar command:



sar [-flags] [ -e time ] [ -f filename ] [-i sec ] [ -s time ]

-f filename   Uses filename as the data source for sar. Default is the current daily data file /var/adm/sa/sadd.
-e time       Selects data up to time. Default is 18:00.
-i sec        Selects data at intervals as close as possible to sec seconds.


flags:

-a   Report use of file access system routines: iget/s, namei/s, dirblk/s.
-A   Report all data. Equivalent to -abcdgkmpqruvwy.
-b   Report buffer activity:
     bread/s, bwrit/s     transfers per second of data between system buffers and disk or other block devices.
     lread/s, lwrit/s     accesses of system buffers.
     %rcache, %wcache     cache hit ratios, that is, (1 - bread/lread) as a percentage.
     pread/s, pwrit/s     transfers using the raw (physical) device mechanism.
-c   Report system calls:
     scall/s              system calls of all types.
     sread/s, swrit/s, fork/s, exec/s     specific system calls.
     rchar/s, wchar/s     characters transferred by read and write system calls. No incoming or outgoing exec and fork calls are reported.
-d   Report activity for each block device (for example, disk or tape drive) with the exception of XDC disks and tape drives. When data is displayed, the device specification dsk- is generally used to represent a disk drive. The device specification used to represent a tape drive is machine dependent. The activity data reported is:
     %busy, avque         portion of time the device was busy servicing a transfer request, and average number of requests outstanding during that time.
     read/s, write/s, blks/s     number of read/write transfers from or to the device, and number of bytes transferred in 512-byte units.
     avwait               average wait time in milliseconds.
     avserv               average service time in milliseconds.
-g   Report paging activities:
     pgout/s              page-out requests per second.
     ppgout/s             pages paged out per second.
     pgfree/s             pages per second placed on the free list by the page stealing daemon.
     pgscan/s             pages per second scanned by the page stealing daemon.
     %ufs_ipf             the percentage of UFS inodes taken off the freelist by iget which had reusable pages associated with them. These pages are flushed and cannot be reclaimed by processes. Thus, this is the percentage of igets with page flushes.
-k   Report kernel memory allocation (KMA) activities:
     sml_mem, alloc, fail     information about the memory pool reserving and allocating space for small requests: the amount of memory in bytes KMA has for the small pool, the number of bytes allocated to satisfy requests for small amounts of memory, and the number of requests for small amounts of memory that were not satisfied (failed).
     lg_mem, alloc, fail      information for the large memory pool (analogous to the information for the small memory pool).
     ovsz_alloc, fail         the amount of memory allocated for oversize requests and the number of oversize requests which could not be satisfied (because oversized memory is allocated dynamically, there is not a pool).
-m   Report message and semaphore activities:
     msg/s, sema/s        primitives per second.
-p   Report paging activities:
     atch/s               page faults per second that are satisfied by reclaiming a page currently in memory (attaches per second).
     pgin/s               page-in requests per second.
     ppgin/s              pages paged in per second.
     pflt/s               page faults from protection errors per second (illegal access to page) or "copy-on-writes".
     vflt/s               address translation page faults per second (valid page not in memory).
     slock/s              faults per second caused by software lock requests requiring physical I/O.
-q   Report average queue length while occupied, and percent of time occupied:
     runq-sz, %runocc     run queue of processes in memory and runnable.
     swpq-sz, %swpocc     these are no longer reported by sar.
-r   Report unused memory pages and disk blocks:
     freemem              average pages available to user processes.
     freeswap             disk blocks available for page swapping.
-u   Report CPU utilization (the default):
     %usr, %sys, %wio, %idle     portion of time running in user mode, running in system mode, idle with some process waiting for block I/O, and otherwise idle.
-v   Report status of process, i-node, file tables:
     proc-sz, inod-sz, file-sz, lock-sz     entries/size for each table, evaluated once at the sampling point.
     ov                   overflows that occur between sampling points for each table.
-w   Report system swapping and switching activity:
     swpin/s, swpot/s, bswin/s, bswot/s     number of transfers and number of 512-byte units transferred for swapins and swapouts (including initial loading of some programs).
     pswch/s              process switches.
-y   Report TTY device activity:
     rawch/s, canch/s, outch/s     input character rate, input character rate processed by canon, and output character rate.
     rcvin/s, xmtin/s, mdmin/s     receive, transmit and modem interrupt rates.
-o filename   Save samples in file, filename, in binary format.
-e time       Select data up to time. Default is 18:00.
-f filename   Use filename as the data source for sar. Default is the current daily data file /var/adm/sa/sadd.
-i sec        Select data at intervals as close as possible to sec seconds.
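A few typical invocations, as a rough sketch (these assume the sysstat data collector is enabled and that a daily data file for the 15th exists; adjust paths to your distribution):

sar -u 2 5                # CPU utilization, 5 samples taken 2 seconds apart
sar -r                    # memory statistics for the current day
sar -q                    # run queue statistics
sar -f /var/adm/sa/sa15   # replay the statistics recorded on the 15th of the month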





Monday 25 June 2012

NRPE

NRPE - Monitoring Tool

Introduction:- 

NRPE stands for Nagios Remote Plugin Executor and is used for remote system monitoring by executing plugins on remote systems. NRPE is used to monitor "local" resources on remote Linux/Unix systems. NRPE allows monitoring of disk usage, system load, the number of users currently logged in, and much more.
Basically, without NRPE, Nagios can only monitor public services such as HTTP, FTP, etc. NRPE works on a client-server basis: you install a daemon on the remote machine (the machine that you want to monitor), then set up your Nagios server to connect to that remote daemon to collect information about the remote machine.


NOTE:-

Using NSClient++ instead of NRPE on the remote host, you can execute checks on Windows machines as well.


Figure: NRPE


DESIGN OVERVIEW:

The NRPE addon consists of two pieces:
  • The check_nrpe plugin, which runs on the local monitoring machine.
  • The NRPE daemon, which runs on the remote Linux/Unix machine.
When Nagios needs to monitor resources of a remote machine (a concrete example follows this list):
  • Nagios executes the check_nrpe plugin and tells it which service needs to be checked.
  • The check_nrpe plugin contacts the NRPE daemon over an SSL-protected connection.
  • The NRPE daemon runs the appropriate Nagios plugin to check the service or resource.
  • The NRPE daemon passes the result of the service check back to the check_nrpe plugin, which in turn passes it on to the Nagios process.
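A rough sketch of what this looks like in practice (the host address, thresholds and install prefix /usr/local/nagios are example values):

# On the remote host, a command definition in nrpe.cfg:
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20

# On the Nagios server, run the plugin by hand to test the connection:
/usr/local/nagios/libexec/check_nrpe -H 192.168.1.50 -c check_load
# Expected output is something like: OK - load average: 0.10, 0.15, 0.20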


Sunday 24 June 2012

lsof - list of open files


lsof - List open files


lsof stands for "list open files". It lists information about any files opened by any process and is a Linux/Unix diagnostic tool. It is great when you are troubleshooting
an issue and need more information about process or connection details. Linux treats almost everything as a file. An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, a stream or a network file (Internet socket, NFS file or UNIX domain socket). A specific file or all the files in a file system may be selected by path. When a process or application interacts with these files, it has to "open" them. Using this command you can dig in and see what your process is doing.




Show Your Network Connections

Show all connections with -i

lsof -i 
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
dhcpcd 6061 root 4u IPv4 4510 UDP *:bootpc
sshd 7703 root 3u IPv6  6499 TCP *:ssh (LISTEN)
sshd 7892 root 3u IPv6  6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED)


Show only TCP (works the same for UDP)

lsof -iTCP 
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
sshd 7703 root 3u IPv6 6499 TCP *:ssh (LISTEN)
sshd 7892 root 3u IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED)


-i :port shows all networking related to a given port

lsof -i :22 
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
sshd 7703 root 3u  IPv6 6499 TCP *:ssh (LISTEN)
sshd 7892 root 3u  IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED)

To show connections to a specific host, use @host

lsof -i@192.168.1.5 
sshd 7892 root 3u IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED)

Show connections based on the host and the port using @host:port

lsof -i@192.168.1.5:22 
sshd 7892 root 3u IPv6 6757 TCP 10.10.1.5:ssh->192.168.1.5:49901 (ESTABLISHED)


Grepping for "LISTEN" shows what ports your system is waiting for connections on

lsof -i| grep LISTEN 
iTunes     400 daniel   16u  IPv4 0x4575228  0t0 TCP *:daap (LISTEN)


Grepping for "ESTABLISHED" shows current active connections

lsof -i| grep ESTABLISHED 
firefox-b 169 daniel  49u IPv4 0t0 TCP 1.2.3.3:1863->1.2.3.4:http (ESTABLISHED)

Working with Users, Processes, and Files

You can also get information on various users, processes, and files on your system using lsof:

Show what a given user has open using -u

lsof -u daniel 
-- snipped --
Dock 155 daniel  txt REG   14,2   2798436   823208 /usr/lib/libicucore.A.dylib
Dock 155 daniel  txt REG   14,2   1580212   823126 /usr/lib/libobjc.A.dylib
Dock 155 daniel  txt REG   14,2   2934184   823498 /usr/lib/libstdc++.6.0.4.dylib
Dock 155 daniel  txt REG   14,2    132008   823505 /usr/lib/libgcc_s.1.dylib
Dock 155 daniel  txt REG   14,2    212160   823214 /usr/lib/libauto.dylib
-- snipped --


See what files and network connections a command is using with -c

lsof -c syslog-ng 
COMMAND    PID USER   FD   TYPE     DEVICE    SIZE       NODE NAME
syslog-ng 7547 root  cwd    DIR    3,3    4096   2 /
syslog-ng 7547 root  rtd    DIR    3,3    4096   2 /
syslog-ng 7547 root  txt    REG    3,3  113524  1064970 /usr/sbin/syslog-ng
syslog-ng 7547 root  mem    REG    0,0   0 [heap] 
syslog-ng 7547 root  mem    REG    3,3  105435   850412 /lib/libpthread-2.4.so
syslog-ng 7547 root  mem    REG    3,3 1197180   850396 /lib/libc-2.4.so
syslog-ng 7547 root  mem    REG    3,3   59868   850413 /lib/libresolv-2.4.so
syslog-ng 7547 root  mem    REG    3,3   72784   850404 /lib/libnsl-2.4.so
syslog-ng 7547 root  mem    REG    3,3   32040   850414 /lib/librt-2.4.so
syslog-ng 7547 root  mem    REG    3,3  126163   850385 /lib/ld-2.4.so
-- snipped --


Pointing to a file shows what's interacting with that file

lsof /var/log/messages 
COMMAND    PID USER   FD   TYPE DEVICE   SIZE   NODE NAME
syslog-ng 7547 root    4w   REG    3,3 217309 834024 /var/log/messages


The -p switch lets you see what a given process ID has open, which is good for learning more about unknown processes

lsof -p 10075 
-- snipped --
sshd    10068 root  mem    REG    3,3   34808 850407 /lib/libnss_files-2.4.so
sshd    10068 root  mem    REG    3,3   34924 850409 /lib/libnss_nis-2.4.so
sshd    10068 root  mem    REG    3,3   26596 850405 /lib/libnss_compat-2.4.so
sshd    10068 root  mem    REG    3,3  200152 509940 /usr/lib/libssl.so.0.9.7
sshd    10068 root  mem    REG    3,3   46216 510014 /usr/lib/liblber-2.3
sshd    10068 root  mem    REG    3,3   59868 850413 /lib/libresolv-2.4.so
sshd    10068 root  mem    REG    3,3 1197180 850396 /lib/libc-2.4.so
sshd    10068 root  mem    REG    3,3   22168 850398 /lib/libcrypt-2.4.so
sshd    10068 root  mem    REG    3,3   72784 850404 /lib/libnsl-2.4.so
sshd    10068 root  mem    REG    3,3   70632 850417 /lib/libz.so.1.2.3
sshd    10068 root  mem    REG    3,3    9992 850416 /lib/libutil-2.4.so
-- snipped --


The -t option returns just a PID

lsof -t -c Mail 
350
ps aux | grep Mail
daniel 350 0.0 1.5 405980 31452 ?? S  Mon07PM 2:50.28 /Applications/Mail.app

Advanced Usage



Using -a allows you to combine search terms, so the query below says, "show me everything running as daniel connected to 1.1.1.1"

lsof -a -u daniel -i @1.1.1.1 
bkdr   1893 daniel 3u  IPv6 3456 TCP 10.10.1.10:1234->1.1.1.1:31337 (ESTABLISHED)


Using the -t and -c options together you can HUP processes

kill -HUP `lsof -t -c sshd`


You can also use the -t with -u to kill everything a user has open

kill -9 `lsof -t -u daniel`


lsof +L1 shows you all open files that have a link count less than 1, often indicative of a cracker trying to hide something

lsof +L1 
(hopefully nothing)







Thursday 21 June 2012

DoS and DDoS Attacks

The Difference Between a DoS and a DDoS


DoS and DDoS attacks sound almost the same, but there is a huge difference between them.

DoS Attack:- DoS stands for Denial of Service. In this type of attack a single machine may be directed to target a specific server, a specific port or service on a targeted system, a network or network component, a firewall or any other system component, flooding it with packets (TCP/UDP). The objective of this attack is to 'overload' the server's bandwidth and other resources. This leaves the server inaccessible to others, which results in the website being blocked.


DDoS Attack:- A DDoS attack is a Distributed Denial of Service attack. It is much like a DoS attack, but the results are far more dangerous. A DDoS attack comes from more than one source, from more than one location, or over more than one internet connection, at the same time. It is an attack where multiple compromised systems (usually infected with viruses, malware or trojans) are used to target a single server. The computers behind such an attack are distributed around the world and are part of a botnet.

The main objective of both attacks is to overload the server with hundreds and thousands of hits or requests. This causes over-usage of server resources (such as memory, CPU, etc.), and it is hard for a server to withstand a DDoS attack.

Wednesday 20 June 2012

AWK tutorial

Awk Introduction and Printing Operations


awk is a powerful Unix command. It allows the user to manipulate files that are structured as columns of data and strings. Once you understand the basics of awk you will find that it is surprisingly useful. awk stands for the names of its authors: Aho, Weinberger, and Kernighan.
Awk is mostly used for pattern scanning and processing. It searches one or more files to see if they contain lines that match the specified patterns and then performs the associated actions.
Some of the key features of Awk are:
  • Awk views a text file as records and fields.
  • Like common programming languages, Awk has variables, conditionals and loops.
  • Awk has arithmetic and string operators.
  • Awk can generate formatted reports
Awk reads from a file or from its standard input, and outputs to its standard output. Awk does not get along with non-text files.


The basic syntax of AWK:
awk 'BEGIN {start_action} {action} END {stop_action}' filename


Here the actions in the BEGIN block are performed before processing the file and the actions in the END block are performed after processing the file. The rest of the actions are performed while processing the file.
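For example, a small sketch that counts the lines of a file, using a BEGIN block to initialise a counter and an END block to print it (any file name will do):

awk 'BEGIN {count = 0} {count++} END {print "lines:", count}' filename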

Syntax:

awk '/search pattern1/ {Actions}
     /search pattern2/ {Actions}' file
In the above awk syntax:
  • search pattern is a regular expression.
  • Actions – statement(s) to be performed.
  • several patterns and actions are possible in Awk.
  • file – Input file.
  • Single quotes around the program prevent the shell from interpreting any of its special characters.

Awk Working Methodology

  1. Awk reads the input file one line at a time.
  2. For each line, it checks the given patterns in order; if a pattern matches, the corresponding action is performed.
  3. If no pattern matches, no action is performed.
  4. In the above syntax, either the search pattern or the action is optional, but not both.
  5. If the search pattern is not given, then Awk performs the given actions for each line of the input.
  6. If the action is not given, the default action is to print all the lines that match the given patterns.
  7. Empty braces without any action do nothing; they do not perform the default printing operation.
  8. Each statement in Actions should be delimited by a semicolon.

Awk Examples

Create a file example with the following data. This file can be easily created using the output of ls -l.
-rw-r--r-- 1 owner owner 12 Jun  8 21:39 p1
-rw-r--r-- 1 owner owner 17 Jun  8 21:15 t1
-rw-r--r-- 1 owner owner 26 Jun  8 21:38 t2
-rw-r--r-- 1 owner owner 25 Jun  8 21:38 t3
-rw-r--r-- 1 owner owner 43 Jun  8 21:39 t4
-rw-r--r-- 1 owner owner 48 Jun  8 21:39 t5

This file contains rows and columns. Each column is separated by a single space and each row is separated by a newline character. We will use this file as the input for the examples discussed here.

1. awk '{print $1}' example

Here $1 has a special meaning: $1, $2, $3... represent the first, second, third columns... in a row, respectively. This awk command will print the first column of each row, as shown below.
-rw-r--r--
-rw-r--r--
-rw-r--r--
-rw-r--r--
-rw-r--r--
-rw-r--r--

To print the 4th and 6th columns of a file, use awk '{print $4,$6}' example

Here the BEGIN and END blocks are not used in awk. So, the print command is executed for each row read from the file.


2. awk '{ if($9 == "t4") print $0;}' example


This awk command checks for the string "t4" in the 9th column, and if it finds a match it prints the entire line. The output of this awk command is:


-rw-r--r-- 1 owner owner 43 Jun 8 21:39 t4