
HowTo Find out if Installed Linux is 32 bits or 64 bits

To figure out whether the installed Linux system is 32-bit or 64-bit, run the following command in a terminal:
uname -m
The possible outcomes and their meanings are:

    * i686   : 32-bit version installed.
    * x86_64 : 64-bit version installed.
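A small sketch that turns the uname output into a friendly message (the i?86 pattern also matches i386/i586):

```shell
# Print a human-readable message based on the machine architecture.
arch="$(uname -m)"
case "$arch" in
    x86_64) echo "64-bit kernel" ;;
    i?86)   echo "32-bit kernel" ;;
    *)      echo "other architecture: $arch" ;;
esac
```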
Read more

How to Disable ICMP Redirects in Linux

ICMP redirect messages are one of the lowest-level routing mechanisms. Routers send them to hosts to point out a more efficient route to a destination than through the router itself. A host accepts such a message and stores the "hint" in a temporary table, then sends subsequent packets directly to the gateway named in the ICMP redirect.

However, the Linux kernel ignores ICMP redirects when it is configured as a router rather than a host. So be careful when setting up routing tables for routers: they have to be complete, since no hints from other hosts will be accepted and only the local routing table decides where packets go.

Disable ICMP Redirects:
In most Linux flavors (Debian, Ubuntu, Red Hat Enterprise Linux, OpenSuSe), ICMP redirects can be disabled on the host by adding the proper entries to the /etc/sysctl.conf configuration file. Simply edit /etc/sysctl.conf and add the following entries:

For IPv4
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
For IPv6
net.ipv6.conf.all.accept_redirects = 0
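After editing /etc/sysctl.conf, the new values can be loaded without a reboot with `sysctl -p` (requires root). The current state can be checked without root by reading /proc/sys, where 0 means redirects are disabled and 1 means they are still accepted:

```shell
# Read the live kernel setting for accepting IPv4 ICMP redirects;
# prints 0 (disabled) or 1 (enabled).
cat /proc/sys/net/ipv4/conf/all/accept_redirects
```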
Read more

Ratproxy - A Passive Web Application Security Assessment Tool

Ratproxy is a semi-automated, largely passive web application security audit tool, optimized for an accurate and sensitive detection, and automatic annotation, of potential problems and security-relevant design patterns based on the observation of existing, user-initiated traffic in complex web 2.0 environments.

Ratproxy also detects and prioritizes broad classes of security problems, such as dynamic cross-site trust model considerations, script inclusion issues, content serving problems, insufficient XSRF and XSS defenses, and much more.

Ratproxy is a local program designed to sit between your Web browser and the application you want to test. It logs outgoing requests and responses from the application, and can generate its own modified transactions to determine how an application responds to common attacks.

    The list of low-level tests it runs is extensive, and includes:
    potentially unsafe JSON-like responses
    bad caching headers on sensitive content
    suspicious cross-domain trust relationships
    queries with insufficient XSRF defenses
    suspected or confirmed XSS and data injection vectors
    OpenSuSe users can install Ratproxy using the "1-click" installer - here

    Running  Ratproxy:
    # ratproxy -v /tmp/ -w ratlog.txt -d -lfscm
    ratproxy version 1.58-beta by
    [*] Proxy configured successfully. Have fun, and please do not be evil.
    [+] Accepting connections on port 8080/tcp (local only)...

    The -d parameter restricts testing to URLs in the specified domain, so ratproxy won't accidentally test a site your application links to for images or advertising.
    The -v parameter tells it where to write trace files.
    The -w parameter indicates where log records should be written.

    Once the proxy is running, configure your web browser to point at the appropriate machine and port (8080). It is advisable to close any non-essential browser windows and purge the browser cache, so as to maximize coverage and minimize noise.
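Command-line HTTP clients can be pointed at the same proxy; a minimal sketch using the standard http_proxy environment variable (host and port match the ratproxy defaults, and ratproxy must actually be running for traffic to flow):

```shell
# Tools like wget and curl honor the http_proxy environment variable,
# so traffic from them will pass through ratproxy as well.
export http_proxy="http://localhost:8080"
echo "$http_proxy"
```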

    The next step is to open the tested service in your browser, log in if necessary, then interact with it in a regular, reasonably exhaustive manner: try all available views and features, upload and download files, add and delete data, and so forth. Then log out gracefully and terminate ratproxy with Ctrl-C.

    Generating  Ratproxy Report:

    Once the proxy is terminated, you may further process its pipe-delimited (|), machine-readable, greppable output with third party tools if so desired, then generate a human-readable HTML report:
    # ratproxy-report.sh ratproxy.log > report.html
    This will produce an annotated, prioritized report with all the identified issues.

    Read more

    Etherape - A Graphical Network Monitor for Linux

    EtherApe is a graphical network monitor for Unix modeled after etherman. Featuring link-layer, IP and TCP modes, it displays network activity graphically: hosts and links change in size with traffic, and protocols are color-coded.

    EtherApe supports Ethernet, FDDI, Token Ring, ISDN, PPP and SLIP devices. It can filter traffic to be shown, and can read traffic from a file as well as live from the network.

    EtherApe Feature:
    • Network traffic is displayed graphically. The more "talkative" a node is, the bigger its representation.
    • Node and link color shows the most used protocol.
    • User may select what level of the protocol stack to concentrate on.
    • You may either look at traffic within your network, end to end IP, or even port to port TCP.
    • Data can be captured "off the wire" from a live network connection, or read from a tcpdump capture file.
    • Live data can be read from ethernet, FDDI, PPP, SLIP and WLAN interfaces, plus several other encapsulated formats (e.g. Linux cooked, PPI).
    • The following frame and packet types are currently supported: ETH_II, 802.2, 802.3, IP, IPv6, ARP, X25L3, REVARP, ATALK, AARP, IPX, VINES, TRAIN, LOOP, VLAN, ICMP, IGMP, GGP, IPIP, TCP, EGP, PUP, UDP, IDP, TP, ROUTING, RSVP, GRE, ESP, AH, EON, EIGRP, OSPF, ENCAP, PIM, IPCOMP, VRRP; and most TCP and UDP services, like TELNET, FTP, HTTP, POP3, NNTP, NETBIOS, IRC, DOMAIN, SNMP, etc.
    • Data display can be refined using a network filter using pcap syntax.
    • Display averaging and node persistence times are fully configurable.
    • Name resolution is done using standard libc functions, thus supporting DNS, hosts file, etc.
    • Clicking on a node/link opens a detail dialog showing protocol breakdown and other traffic statistics.
    • Protocol summary dialog shows global traffic statistics by protocol.
    • Node summary dialog shows traffic statistics by node.
    • Scrollkeeper/rarian-compatible manual integrated with yelp.
    EtherApe Installation:
    OpenSuSe user can install EtherApe using "1-click" installer - here
    Ubuntu user can install EtherApe using command: sudo aptitude install etherape

    After successful installation, open a terminal and type the command: etherape
    When you fire up EtherApe, you see a dynamic web of traffic (shown in the picture below).

    Read more

    How to set MTU (Maximum Transmission Unit) value on OpenSuSe Linux

    In computer networking, the maximum transmission unit (MTU) of a layer of a communications protocol is the size (in bytes) of the largest protocol data unit that the layer can pass onwards.

    MTU parameters usually appear in association with a communications interface (NIC, serial port, etc.). Standards (Ethernet, for example) can fix the size of an MTU; or systems (such as point-to-point serial links) may decide MTU at connect time. A higher MTU brings greater efficiency because:
    • each packet carries more user-data
    • protocol overheads, such as headers or underlying per-packet delays, remain fixed,
    • higher efficiency means a slight improvement in bulk protocol throughput

    For sending bulk data, the Internet generally works better with larger packets. Each packet implies a routing decision: sending a 1-megabyte file can mean around 700 packets when using packets that are as large as possible, or 4000 when using the smallest default.

    However, not all parts of the Internet support full 1460 bytes of payload per packet. It is therefore necessary to try and find the largest packet that will 'fit', in order to optimize a connection.

    This process is called 'Path MTU Discovery', where MTU stands for 'Maximum Transmission Unit'.
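The efficiency argument above can be checked with a little shell arithmetic (assuming a 1500-byte Ethernet MTU and 40 bytes of IPv4+TCP headers):

```shell
# With a 1500-byte MTU and 40 bytes of headers, each packet carries
# 1460 bytes of payload.
mtu=1500
headers=40
payload=$((mtu - headers))
echo "payload per packet: $payload bytes"
# A 1 MiB file then needs roughly this many full-size packets:
echo "packets for 1 MiB: $(( (1024 * 1024 + payload - 1) / payload ))"
```

This works out to 719 full-size packets, consistent with the "around 700 packets" figure quoted above.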

    Setting MTU value in OpenSuSe Linux:
    Go to Yast → Network Device → Network Settings
    In the Network Settings dialog box, select the "Overview" tab and click the "Edit" button; this opens your network card settings. Click the "General" tab to set the MTU for this card, as shown in the figure below.

    To view the updated MTU, type the command: ip link list. You should see output similar to ...
    1: lo: mtu 16436 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: eth0: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
        link/ether 00:19:d1:18:ba:a6 brd ff:ff:ff:ff:ff:ff
    Read more

    pngcrush - Optimize PNG file (image) to Speed up WebSite

    Pngcrush is an excellent batch-mode compression utility for PNG images. Depending on the application that created the original PNGs, it can improve the file size anywhere from a few percent to 40% or more (completely losslessly). The utility also allows specified PNG chunks (e.g. text comments) to be inserted or deleted, and it can fix incorrect gamma info.

    Pngcrush is not installed by default in Linux, but it is available in the repositories of most Linux distributions. In Ubuntu Linux, you can install pngcrush by running the following command:

    $ sudo apt-get install pngcrush

    OpenSuSe user can install Pngcrush using "1-click" installer - here

    Using pngcrush
    Running the Pngcrush command with no options may already produce smaller files. Here is how we can use it to optimize and reduce the size of PNG files:

    $ pngcrush -brute -e "Optimize.png" filename.png

    -brute – Use brute force: try 114 different filter/compression methods. This option is very time-consuming, but it can reduce the size of PNG images by a significant factor.

    -e "ext" – Specify a new extension for all output files. In the above example, the output of the command will be a file named filename Optimize.png . You can use any extension you want. This option ensures that the original file is not overwritten.

    More examples
    $ pngcrush -brute -d "/home/laf/images/pngcrush/"  *.png

    All the .png files in the current directory are optimized and saved into the directory given by the -d option.
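A minimal batch sketch building on the options above: crush every PNG in the current directory into an optimized/ sub-directory (the directory name is just an example), skipping politely when pngcrush is not installed:

```shell
# Batch-optimize PNGs; falls through gracefully if the tool is absent.
if command -v pngcrush >/dev/null 2>&1; then
    mkdir -p optimized
    for f in *.png; do
        [ -e "$f" ] || continue    # glob matched nothing: skip
        pngcrush -q "$f" "optimized/$f"
    done
    echo "done"
else
    echo "pngcrush not installed"
fi
```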
    Read more

    shred - Securely delete files in Linux

    If you want to delete confidential data from your computer and make sure it is no longer accessible to anyone, do not delete the file with the regular rm command: there is still a chance that someone could use recovery software to restore your deleted data before that storage area is overwritten by new data. The proper way to permanently dispose of such data in Linux is the shred command.

    * NOTE *
    In the case of ext3/ext4 file systems, in both the data=ordered (default) and data=writeback modes, shred works as usual. Ext3/ext4 journaling modes can be changed by adding the data=something option to the mount options for a particular file system in the /etc/fstab file.

    Most Linux distributions come with shred already installed. If not, it is one click away in the repositories: shred is included in the multipurpose coreutils package, which contains tens of utilities.

    Using shred:
    shred is a command-line utility. It can be run against files or devices, with certain flags.
    # shred -f -u -v -z filename
    -f change permissions to allow writing if necessary
    -u truncate and remove file after overwriting
    -v be verbose (detailed) and show progress
    -z add a final overwrite with zeros to hide shredding

    Although shred might not work on bad sectors, it is one of the best tools available for securely erasing data from a hard disk. It is more secure to run shred on a complete partition than on a single file, because some filesystems keep backup copies, and shred makes no attempt to delete those.
    # shred /dev/hda2
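The flags above can be tried safely on a throwaway file:

```shell
# Create a scratch file, then overwrite, zero out, and remove it.
echo "secret data" > /tmp/demo-secret.txt
shred -u -z /tmp/demo-secret.txt
[ -e /tmp/demo-secret.txt ] || echo "file is gone"
```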
    Read more

    How to Synchronize System Clock with NTP Time Server on OpenSuSe Linux

    Networked computers share resources such as files. These shared resources often have time-stamps associated with them so it is important that computers communicating over networks, including the Internet, are synchronized. Imagine the confusion that could be created if an email appeared to arrive before being sent, or if an important file, modified in offices in different countries, had a version with a time-stamp indicating it had been created two hours later than its updated version.

    The Network Time Protocol (NTP) is an Internet Standard Recommended Protocol for communicating the Coordinated Universal Time (UTC) from special servers called time servers and synchronizing computer clocks on an IP network.

    Time Servers, or time source references, communicate with special time keeping equipment such as Global Positioning System (GPS) receivers, atomic clocks, radios, satellite receivers or modems.

    The accuracy of each time server is defined by a number called its stratum. Stratum one servers are those at the top level which communicate directly with a time source such as a GPS or an atomic clock. Each level downwards in the hierarchy is classified as one greater than the preceding level (for example, stratum two, stratum three). Current stratum one servers can provide time within a millisecond's accuracy or better.

    For a list of available Stratum 1 and 2 servers, or NTP servers based on your region (country), consult the public NTP server lists.

    OpenSuSe user can install NTP using "1-click" installer - here

    Configuring NTP
    The ntp.conf file is the main source of configuration information for an NTP server installation. Among other things, it contains a list of reference clocks that the installation is to synchronize with. A list of NTP server references is specified with the 'server' configuration command, or the same thing can be configured using the YaST control center:
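A minimal ntp.conf sketch to go with the description above; the pool.ntp.org names are the public NTP pool and stand in here for your regional servers:

```
# /etc/ntp.conf (minimal example)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
driftfile /var/lib/ntp/drift
```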

    Go to Yast → System → Date and Time, click on "Change" button to open up the NTP configuration dialog box.

    Provide the name of an appropriate time server located in your region (country) and click "Accept" to save the configuration in your ntp.conf.

    Now, start the ntp daemon (/etc/init.d/ntp start) to make sure that your system clock keeps synchronizing with the specified time server.
    Read more

    Measuring the Performance of HTTP Web Servers using ApacheBench (ab)

    ApacheBench is a command line utility for measuring the performance of HTTP web servers, in particular the Apache HTTP Server. It was designed to give an idea of the performance that a given Apache installation can provide. In particular, it shows how many requests per second the server is capable of serving.

    The ab tool comes bundled with the standard Apache source distribution and, like the Apache web server itself, is free, open source software distributed under the terms of the Apache License.

    Using ApacheBench (ab):
    Suppose we want to see how fast Yahoo can handle 100 requests, with a maximum of 10 requests running concurrently:

    ab -n 100 -c 10

    As you can see, this is very useful information: the server returned requests at a rate of 16.50 requests per second; the fastest request took 556 ms, the slowest 830 ms.

    Suppose you want to test multiple URLs concurrently as well. You can do this by creating a shell script with multiple ab calls. At the end of each line, place an &; this makes the command run in the background and lets the next command start executing. You will also want to redirect the output to a file for each URL using > filename. For example:

    ab -n 100 -c 10 > test1.txt &
    ab -n 100 -c 10  > test2.txt &
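The same background-and-wait pattern, sketched with harmless placeholder commands in place of the real ab calls (a final wait ensures the script doesn't exit before the background jobs finish):

```shell
# Two "benchmarks" running in parallel; each writes its output to a file.
( sleep 1; echo "job 1 done" ) > /tmp/test1.txt &
( sleep 1; echo "job 2 done" ) > /tmp/test2.txt &
wait    # block until both background jobs have finished
cat /tmp/test1.txt /tmp/test2.txt
```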
    Read more

    MySQLTuner - Performance Tuning MySQL on Linux

    MySQLTuner is a script written in Perl that allows you to review a MySQL installation quickly and make adjustments to increase performance and stability. The current configuration variables and status data are retrieved and presented in a brief format, along with some basic performance suggestions.

    It is extremely important for you to fully understand each change you make to a MySQL database server.  If you don't understand portions of the script's output, or if you don't understand the recommendations, you should consult a knowledgeable DBA or even a system administrator that you trust. Always test your changes on staging environments, and always keep in mind that improvements in one area can negatively affect MySQL in other areas.

    Running MySQLTuner:
    Download MySQLTuner using the following command:
    # wget
    To run the script, simply make it executable and run it:
    # chmod +x
    # ./
    Enter your administrative username and password when prompted.
    The output looks similar to the following:

    Read more

    How to Configure Apache as a Forward / Reverse Proxy

    Apache can be configured in both forward and reverse proxy (also known as gateway) modes.

    Apache as Forward Proxy:
    An ordinary forward proxy is an intermediate server that sits between the client and the origin server. In order to get content from the origin server, the client sends a request to the proxy naming the origin server as the target and the proxy then requests the content from the origin server and returns it to the client. The client must be specially configured to use the forward proxy to access other sites.

    A typical usage of a forward proxy is to provide Internet access to internal clients that are otherwise restricted by a firewall. The forward proxy can also use caching (mod_cache) to reduce network usage.

    The forward proxy is activated using the ProxyRequests directive. Because forward proxies allow clients to access arbitrary sites through your server and to hide their true origin, it is essential that you secure your server so that only authorized clients can access the proxy before activating a forward proxy.
    ProxyRequests On
    ProxyVia On

    <Proxy *>
    Order deny,allow
    Deny from all
    Allow from 192.168.1
    </Proxy>
    Apache as Reverse Proxy:
    A reverse proxy (or gateway), by contrast, appears to the client just like an ordinary web server. No special configuration on the client is necessary. The client makes ordinary requests for content; the reverse proxy then decides where to send those requests and returns the content as if it were itself the origin.

    A typical usage of a reverse proxy is to provide Internet users access to a server that is behind a firewall. Reverse proxies can also be used to balance load among several back-end servers, or to provide caching for a slower back-end server. In addition, reverse proxies can be used simply to bring several servers into the same URL space.

    A reverse proxy is activated using the ProxyPass directive or the [P] flag to the RewriteRule directive. It is not necessary to turn ProxyRequests on in order to configure a reverse proxy.
    ProxyRequests Off

    <Proxy *>
    Order deny,allow
    Allow from all
    </Proxy>

    ProxyPass /foo
    ProxyPassReverse /foo
    Read more

    How to Add a Custom New Service under xinetd

    xinetd, the eXtended InterNET Daemon, is an open-source super-server daemon which runs on many Unix-like systems and manages Internet-based connectivity. It offers a more secure extension to or version of inetd, the Internet daemon.

    xinetd features access control mechanisms such as TCP Wrapper ACLs, extensive logging capabilities, and the ability to make services available based on time. It can place limits on the number of servers that the system can start, and has deployable defence mechanisms to protect against port scanners, among other things.

    Create a new configuration file in /etc/xinetd.d with at least the following information:

    service SERVICE_NAME                     # Name from /etc/services
    {
            server      = /PATH/TO/SERVER    # The service executable
            server_args = ANY_ARGS_HERE      # Any arguments; omit if none
            user        = USER               # Run the service as this user
            socket_type = TYPE               # stream, dgram, raw, or seqpacket
            wait        = YES/NO             # yes = single-threaded, no = multithreaded
    }

    Name the file SERVICE_NAME. Then restart  xinetd to read your new service file.

    On starting again, xinetd reads all files in /etc/xinetd.d only if /etc/xinetd.conf tells it to, via this line:
    includedir /etc/xinetd.d
    Check your /etc/xinetd.conf to confirm the location of its includedir.
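As a purely illustrative example, a hypothetical echo-demo service file might look like this; every name and path below is an assumption, not a real package:

```
# /etc/xinetd.d/echo-demo
service echo-demo
{
        server      = /usr/local/bin/echo-server
        user        = nobody
        socket_type = stream
        wait        = no
}
```

After saving the file, restart xinetd (for example with /etc/init.d/xinetd restart) so it picks up the new service.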
    Read more

    Collectd – Linux System Statistics Collection Daemon

    collectd is a small and modular daemon written in C for performance which collects system information periodically and provides means to store the values. Included in the distribution are numerous plug-ins for collecting CPU, disk, and memory usage, network interface and DNS traffic, network latency, database statistics, and much more. Custom statistics can easily be added in a number of ways, including execution of arbitrary programs and plug-ins written in Perl. Advanced features include a powerful network code to collect statistics for entire setups and SNMP integration to query network equipment.

    OpenSuSe user can install collectd using "1-click" installer - here
    Debian/Ubuntu user can install collectd using command: apt-get install collectd
    For Red Hat, CentOS and Fedora, there are collectd RPM packages in Dag Wieers' repository.

    The configuration lives in /etc/collectd.conf. Open the file and pay particular attention to the LoadPlugin lines: # vi /etc/collectd.conf

    For each plugin, there is a LoadPlugin line in the configuration. Almost all of those lines are commented out in order to keep the default configuration lean.

    There's a wiki page containing a table of all plugins.

    The Interval setting controls how often values are read. You should set this once and then never touch it again.
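A minimal sketch of what the relevant part of /etc/collectd.conf might look like (the cpu and memory plug-ins ship with the stock distribution; the 10-second interval is just an example):

```
# /etc/collectd.conf (sketch)
Interval 10
LoadPlugin cpu
LoadPlugin memory
```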

    If you're done configuring, you need to (re-)start the daemon. If you installed a binary package there should be an init-script somewhere. Under Debian, the command would be:
    # /etc/init.d/collectd restart
    You also need to configure your webserver (apache) to display the rrd graph:
    <IfModule mod_cgi.c>
        ScriptAlias /collectd /srv/www/collectd/collection.cgi
        <Directory "/srv/www/collectd">
            Order allow,deny
            Allow from 192.168.1

    Restart your apache server and browse to http://localhost/collectd

    Read more

    Adeona - Open Source System for tracking the location of your lost or stolen laptop

    Researchers at the University of Washington and the University of California San Diego have unveiled an open source technology that may enable people to recover missing or stolen notebook computers—and, in some cases, maybe even take pictures of the person(s) who stole it.

    Adeona is the first Open Source system for tracking the location of your lost or stolen laptop that does not rely on a proprietary, central service. This means that you can install Adeona on your laptop and go — there's no need to rely on a single third party. What's more, Adeona addresses a critical privacy goal different from existing commercial offerings. It is privacy-preserving. This means that no one besides the owner (or an agent of the owner's choosing) can use Adeona to track a laptop. Unlike other systems, users of Adeona can rest assured that no one can abuse the system in order to track where they use their laptop.

    Adeona is designed to use the Open Source OpenDHT distributed storage service to store location updates sent by a small software client installed on an owner's laptop. The client continually monitors the current location of the laptop, gathering information (such as IP addresses and local network topology) that can be used to identify its current location. The client then uses strong cryptographic mechanisms to not only encrypt the location data, but also ensure that the ciphertexts stored within OpenDHT are anonymous and unlinkable. At the same time, it is easy for an owner to retrieve location information.

    Adeona has three main properties:
    Private: Adeona uses state-of-the-art cryptographic mechanisms to ensure that the owner is the only party that can use the system to reveal the locations visited by a device.

    Reliable: Adeona uses a community-based remote storage facility, ensuring retrievability of recent location updates.

    Open source and free: Adeona's software is licensed under GPLv2. While your locations are secret, the tracking system's design is not.

    OpenSuSe user can use "1-click" installer to install Adeona - here

    Setup / Configure Adeona:
    1. Initialize Adeona and set your personal password:
    # /usr/sbin/adeona-init -r /usr/share/adeona/ -l /var/log/adeona/
    2. Move the generated files into the proper directories:
    # mv adeona-clientstate.cst adeona-retrievecredentials.ost /var/lib/adeona
    3. IMPORTANT: Please don't forget to make a backup copy of your location-finding credentials: /var/lib/adeona/adeona-retrievecredentials.ost
    4. Start the service and enable it for the preferred run levels.
    # /etc/init.d/adeona start
    How to retrieve the location:
    # /usr/sbin/adeona-retrieve -r /usr/share/adeona/ -l /var/log/adeona/ -s /path/to/your/adeona-retrievecredentials.ost -n 1

    NOTE: Adeona has pseudorandomly scheduled updates and there may not be any location information stored in OpenDHT yet. Please wait about 1 hour before trying to do a retrieval.

    Adeona will work as long as it is allowed connections on port 80 (HTTP) and port 5852 (for OpenDHT). Note that these are also required to be open for retrieval. Additionally, if one wants nearby routers reported, then UDP packets should not be dropped (this allows performing traceroutes).
    Read more

    Baobab - Linux Graphical Disk Usage Analyzer

    Baobab (Disk Usage Analyzer) is a graphical, menu-driven application to analyze disk usage in any GNOME environment. Disk Usage Analyzer can easily scan either the whole file-system tree or a specific user-requested directory branch (local or remote).

    Baobab also auto-detects, in real time, any changes made to your home directory as well as to any mounted or unmounted device. It also provides a full graphical treemap window and a rings chart for each selected folder.

    With Baobab you can check the space each directory or sub-directory uses on your file system. The information is presented in absolute values as well as percentages, and you can go as deep as you need into your directory structure.

    It is also possible to scan a network disk, via public FTP, FTP with login, SSH, Windows share, HTTP, or HTTPS.

    OpenSuSe user can install Baobab using "1-click" installer - here
    After successful installation, go to a terminal and type the command baobab to open the application.

    Using  Baobab:
    To start a full filesystem scan, select Analyzer → Scan Filesystem from the menu, or press the Scan Filesystem toolbar button.

    When the scanning process ends, you will get the full tree of your filesystem, like the one in the next figure.

    Baobab will display sizes in the directory tree as allocated space. This means that the displayed sizes refer to the actual disk usage and not to the apparent directory size. If you want to view the apparent file size, uncheck View → Allocated Space.
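The allocated-versus-apparent distinction can also be seen on the command line with GNU du (a sketch using a throwaway directory):

```shell
# du reports allocated blocks by default, while --apparent-size
# reports the byte length of the files themselves; for a 1-byte file
# the two differ noticeably.
mkdir -p /tmp/baobab-demo
printf 'x' > /tmp/baobab-demo/tiny
du -sh /tmp/baobab-demo
du -sh --apparent-size /tmp/baobab-demo
```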

    Remote scan
    If you need to scan a remote server folder, just click the toolbar icon Scan Remote Folder or select Analyzer → Scan Remote Folder from the menu; you will get the following dialog box. Baobab can connect to a server through ssh, ftp, smb, http and https.
    Read more

    A brief history of Debian Linux

    Debian was founded in 1993 by Ian Murdock, then a student at Purdue University, who wrote the Debian Manifesto, which called for the creation of a Linux distribution to be maintained in an open manner, in the spirit of Linux and GNU. He chose the name by combining the first name of his then-girlfriend (now wife) Debra with his own first name "Ian", forming the portmanteau "Debian", pronounced "deb-ee-en".

    The Debian Project grew slowly at first and released its first 0.9x versions in 1994 and 1995. The first ports to other architectures were begun in 1995, and the first 1.x version of Debian was released in 1996. In 1996, Bruce Perens replaced Ian Murdock as the project leader. At the suggestion of fellow developer Ean Schuessler, he guided the editing process of the Debian Social Contract and the Debian Free Software Guidelines, defining fundamental commitments for the development of the distribution. He also initiated the creation of the legal umbrella organization Software in the Public Interest.

    Bruce Perens left in 1998 before the release of the first glibc-based Debian, 2.0. The Project proceeded to elect new leaders and made two more 2.x releases, each including more ports and more packages. APT was deployed during this time and the first port to a non-Linux kernel, Debian GNU/Hurd, was started as well. The first Linux distributions based on Debian, Corel Linux and Stormix's Storm Linux, were started in 1999. Though no longer developed, these distributions were the first of many distributions based on Debian.

    In late 2000, the Project made major changes to archive and release management, reorganizing software archive processes with new "package pools" and creating a testing branch as an ongoing, relatively stable staging area for the next release. In 2001, developers began holding an annual conference called Debconf with talks and workshops for developers and technical users.
    Read more

    Understanding Partition / Filesystem Mount Options (fstab file)

    Here is a sample /etc/fstab entry

    /dev/cdrom   /media/cdrom   auto ro,noauto,user,exec   0   0

    The fourth column in fstab lists all the mount options for the device or partition. This is also the most confusing column in the fstab file, but knowing what the most common options mean saves you a big headache. For more information, check the man page of mount.

    auto and noauto With the auto option, the device will be mounted automatically (at bootup). auto is the default option. If you don't want the device to be mounted automatically, use the noauto option in /etc/fstab. With noauto, the device can be mounted only explicitly.

    user and nouser These are very useful options. The user option allows normal users to mount the device, whereas nouser lets only root mount the device. nouser is the default. If you're not able to mount your cdrom, floppy, Windows partition, or something else as a normal user, add the user option to /etc/fstab.

    exec and noexec exec lets you execute binaries that are on that partition, whereas noexec doesn't let you do that. noexec might be useful for a partition that contains binaries you don't want to execute on your system, or that can't even be executed on your system. This might be the case of a Windows partition.

    ro Mount the filesystem read-only.

    rw Mount the filesystem read-write.

    sync and async sync means I/O is done synchronously: when you, for example, copy a file to the partition, the changes are physically written to the partition at the same time you issue the copy command.

    However, if you have the async option in /etc/fstab, input and output are done asynchronously: when you copy a file to the partition, the changes may be physically written to it long after the command is issued. async is the default.

    defaults Uses the default options that are rw, suid, dev, exec, auto, nouser, and async.
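Putting several of these options together, a sketch of two /etc/fstab entries (device names and mount points are illustrative):

```
# a data partition mounted read-write at boot, checked after the root fs
/dev/sda3   /data         ext4   defaults           0  2
# a USB stick that normal users may mount on demand
/dev/sdb1   /media/usb    vfat   noauto,user,rw     0  0
```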
    Read more

    Block Ads / Malware / Spyware using hosts file (Windows / Linux)

    The easiest way to block advertisements and other malware sites is to point those sites to the IP address 127.0.0.1 or 0.0.0.0 (however, the zero version may not be compatible with all systems) in your hosts file.

    There are sites which provide lists of the host names that are responsible for displaying the ads or spreading the malware, so download the list from here, put it into your hosts file, and restart your system.

    For Windows 9x and ME place this file at "C:\Windows\hosts"
    For NT, Win2K and XP use "C:\windows\system32\drivers\etc\hosts" or "C:\winnt\system32\drivers\etc\hosts"
    For Linux, Unix, or OS X place this file at "/etc/hosts".
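    A blocking hosts file is just a list of address-to-hostname mappings; the domains below are made-up examples:

```
# requests for these hosts never leave your machine
127.0.0.1   ads.example.com
127.0.0.1   tracker.example.net
0.0.0.0     banners.example.org
```

    Every blocked hostname resolves to your own machine (or to the unroutable 0.0.0.0), so the browser's request for the ad never reaches the ad server.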

    This is an easy and effective way to protect yourself from many types of spyware; it reduces bandwidth use, blocks certain pop-up traps, prevents user tracking by way of "web bugs" embedded in spam, gives browsers partial protection from certain web-based exploits, and blocks most advertising you would otherwise be subjected to on the Internet.

    One of the advantages of this modification is that your web surfing experience will be significantly faster, because your browser does not have to wait for the annoying advertisements to load.

    There are other effective ways to kill advertisements - here
    Read more

    Woof - Simple Web-based File Sharing

    Transferring files from one computer to another on a network isn't always a straightforward task. Equipping networks with a file server, FTP server, or common web server is one way to simplify the process of exchanging files, but if you need a simpler yet efficient method, try Woof -- short for Web Offer One File. It's a small Python script that facilitates transfer of files across networks and only requires that the recipient of the files have a Web browser.

    Woof tries a different approach. It assumes that everybody has a web browser or a command-line web client installed. Woof is a small, simple, stupid web server that can easily be invoked on a single file. Your partner can access the file with tools they trust (e.g. wget). No need to enter passwords on keyboards where you don't know about keyboard sniffers, no need to start a huge lot of infrastructure; just do a
    $ woof filename
    and tell the recipient the URL woof spits out. Once the file has been fetched, woof will quit and everything is done. And when someone wants to send you a file, woof has a switch to offer itself, so the other side can fetch woof and offer a file back to you.
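    The idea is simple enough to sketch in a few lines of Python using only the standard library. The code below is a toy illustration of the woof approach, not woof's actual implementation; the host and port defaults are assumptions:

```python
import http.server
import threading

def serve_once(path, host="127.0.0.1", port=8888):
    """Serve a single file over HTTP and quit after the first download
    (a toy illustration of the woof idea, not woof's actual code)."""

    class OneFileHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            with open(path, "rb") as f:
                data = f.read()
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)
            # shutdown() must be called from another thread,
            # otherwise it would deadlock with serve_forever()
            threading.Thread(target=self.server.shutdown).start()

        def log_message(self, fmt, *args):
            pass  # stay quiet; woof prints a common-log-format line instead

    with http.server.HTTPServer((host, port), OneFileHandler) as srv:
        print("Now serving on http://%s:%d/" % (host, srv.server_address[1]))
        srv.serve_forever()  # returns after the first completed download
```

    The real woof adds option parsing, a download counter, tar/gzip packing of directories, and a switch for offering itself to the other side.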

    OpenSuSe users can install Woof using the "1-click" installer - here

    Using Woof is really very simple: provide a valid IP address (your machine's IP), a port, and the filename that you want to share with others. Once it's up and running, Woof will print a URL that the recipient can use. An example is shown below ...

    $ woof -i <your-ip> -p 8888 testfile.txt
    Now serving on http://<your-ip>:8888/testfile.txt

    As you can see, we have specified our IP address and the port with the -i and -p options, followed by the file we need to transfer.

    You can also specify a directory to transfer. When a directory is specified, a tar archive gets served; by default it is gzip compressed.

    Once the file has been downloaded, Woof quits and prints an entry in the common log format that looks like:
    <client-ip> - - [03/Jan/2010 14:04:25] "GET /testfile.txt HTTP/1.0" 200 -

    There is another option, -c, with which you can specify the total number of times the file can be downloaded by any recipient. By default Woof sets this count to 1. In the example below, we set the count to 2; Woof exits when the file has been downloaded twice, printing two log entries.

    $ woof -i <your-ip> -p 8888 -c 2 testfile.txt
    Now serving on http://<your-ip>:8888/testfile.txt
    <client-ip> - - [03/Jan/2010 16:09:45] "GET /testfile.txt HTTP/1.0" 200 -
    <client-ip> - - [03/Jan/2010 16:09:49] "GET /testfile.txt HTTP/1.0" 200 -

    To make things even simpler, create a .woofrc file in your home directory with the following content:
    ip = <your-ip>
    port = 8888
    count = 2
    compressed = gz
    With this file in place you are no longer required to pass all those parameters; woof will pick up the required parameters from the file.

    $ woof testfile.txt
    Read more

    googsystray - System Tray Notifications for various Google Services

    Googsystray is a system tray app for Google Voice, GMail, Google Calendar, Google Reader, and Google Wave. It lets you keep track of the information provided by those services without having to keep a bunch of Web pages open or constantly checking them. It notifies you of new events, such as messages or alerts, and provides basic services quickly.

    OpenSuSe users can install Googsystray using the "1-click" installer - here

    To install Googsystray in some other Linux distro, first make sure you have Python and PyGTK installed, then download the .tar.gz archive and extract it. Open a terminal, navigate to the folder where you extracted Googsystray, and run the following command (setup.py being the usual convention for Python tarballs):
    sudo python ./setup.py install
    Then, go to the bin subfolder of your googsystray folder, and double-click the googsystray file.
    To configure Googsystray, right-click its icon in your system tray and select "Preferences":

    To open an unread message, left click the service icon (Gmail, etc) in your systray (notification area).
    Read more

    Recursively Encrypt / Decrypt Directories using gpgdir on Linux

    gpgdir is a script that encrypts and decrypts directories using a GPG key. It supports recursively descending through a directory in order to make sure it encrypts or decrypts every file in a directory and all of its subdirectories.

    All file mtime and atime values are preserved across encryption and decryption operations. In addition, gpgdir is careful not to encrypt hidden files and directories.
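    The selection logic is easy to picture. Below is a minimal Python sketch (a hypothetical illustration, not gpgdir's actual perl code) of choosing which files such a tool would encrypt, skipping hidden files and directories; the gpg call itself is left out:

```python
import os

def files_to_encrypt(top):
    """Walk a directory tree, skipping hidden files and directories,
    and return the paths a gpgdir-like tool would encrypt."""
    selected = []
    for dirpath, dirnames, filenames in os.walk(top):
        # prune hidden directories in place so os.walk never descends into them
        dirnames[:] = [d for d in dirnames if not d.startswith(".")]
        for name in filenames:
            if not name.startswith("."):
                selected.append(os.path.join(dirpath, name))
    return sorted(selected)
```

    Preserving timestamps would then be a matter of recording os.stat() results before encryption and replaying them with os.utime() afterwards.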

    Other features include the ability to interface with the wipe program for secure file deletion, and the ability to obfuscate the original filenames within encrypted directories.

    OpenSuSe users can install gpgdir using the "1-click" installer - here

    Others: The distribution tarball of gpgdir contains an installer script which installs gpgdir along with all the required modules if they are not already available on your system. If you run the installer as a non-root user, the modules and gpgdir will be installed into a subdirectory of your home directory.

    The first time you attempt to use gpgdir, it will set up a configuration file, $HOME/.gpgdirrc, and inform you that you must edit that file and specify the default key to use for encryption with the use_key directive.
    nikesh@poison:~> gpgdir
    [+] Creating gpgdir rc file: /home/nikesh/.gpgdirrc
    [*] Please edit /home/nikesh/.gpgdirrc to include your gpg key identifier,
        or use the default GnuPG key defined in ~/.gnupg/options.  Exiting.
    Run "gpg --list-keys" to get the list of keys installed on your system.

    Once you've specified a default encryption key, the usage of gpgdir is simple: pass -e to gpgdir to encrypt a directory tree and -d to decrypt it again. Examples are shown below ...

    To encrypt a directory, and use the wipe command to securely delete the original unencrypted files:
    $ gpgdir -W -e /some/dir
    To encrypt a directory without descending into its subdirectories:
    $ gpgdir -e /some/dir --no-recurse
    Read more

    SWFTools - SWF Manipulation and Creation tools under Linux

    SWFTools is a collection of utilities for working with Adobe Flash files (SWF files). The tool collection includes programs for reading SWF files, combining them, and creating them from other content (like images, sound files, videos or source code). SWFTools is released under the GPL. The current collection comprises the programs detailed below:

    * PDF2SWF - A PDF to SWF Converter. Generates one frame per page. Enables you to have fully formatted text, including tables, formulas etc. inside your Flash Movie.

    * SWFCombine - A tool for inserting SWFs into Wrapper SWFs. (Templates) E.g. for including the pdf2swf SWFs in some sort of Browsing-SWF.

    * SWFStrings - Scans SWFs for text data.

    * SWFDump - Prints out various information about SWFs.

    * JPEG2SWF - Takes one or more JPEG pictures and generates a SWF slideshow.

    * PNG2SWF - Like JPEG2SWF, only for PNGs.

    * GIF2SWF - Converts GIFs to SWF. Also able to handle animated gifs.

    * WAV2SWF - Converts WAV audio files to SWFs, using the L.A.M.E. MP3 encoder library.

    * Font2SWF - Converts font files (TTF, Type1) to SWF.

    * SWFBBox - Lets you readjust SWF bounding boxes.

    * SWFC - A tool for creating SWF files from simple script files.

    * SWFExtract - Lets you extract MovieClips, sounds, images etc. from SWF files.

    * RFXSWF Library - A fully featured library which can be used for standalone SWF generation. Includes support for bitmaps, buttons, shapes, text, fonts, sound etc. It also has support for ActionScript using the Ming ActionCompiler.

    OpenSuSe users can install SWFTools using the "1-click" installer - here
    Ubuntu users can download the .deb file from here and install SWFTools using the command:
    $ sudo dpkg -i swftools_0.9.0-0ubuntu1_i386.deb

    Extract images/sounds from avatar.swf using swfextract
    First, list all extractable items by typing the following command: swfextract avatar.swf
    The result is something like:
    Objects in file avatar.swf:
     [-i] 3 Shapes: ID(s) 7, 12, 14
     [-i] 6 MovieClips: ID(s) 3, 6, 8, 10, 13, 15
     [-j] 1 JPEG: ID(s) 11
     [-p] 1 PNG: ID(s) 318
     [-s] 3 Sounds: ID(s) 28-30
     [-f] 1 Frame: ID(s) 0
    Now you can extract a shape using: swfextract -i 12 avatar.swf -o shape.swf
    a sound using: swfextract -s 28 avatar.swf -o sound.wav
    a PNG image file using: swfextract -p 318 avatar.swf -o file.png
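    The listing format is regular enough to parse mechanically. A hypothetical Python helper (not part of SWFTools) that turns the listing into a table mapping each object kind to its extraction flag and IDs:

```python
import re

def parse_swfextract_listing(text):
    """Parse `swfextract file.swf` listing output into a dict
    of {kind: (extraction_flag, id_string)}."""
    objects = {}
    # lines look like: " [-i] 3 Shapes: ID(s) 7, 12, 14"
    for m in re.finditer(r"\[-(\w)\]\s+\d+\s+(\w+):\s+ID\(s\)\s+([\d,\s-]+)", text):
        flag, kind, ids = m.groups()
        objects[kind] = (flag, ids.strip())
    return objects
```

    With the avatar.swf listing above, parse_swfextract_listing would report that Sounds are extracted with -s and occupy IDs 28-30, which is exactly the flag/ID pair the swfextract commands need.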
    Read more

    UNP : Universal File Unpacking Utility for Ubuntu / Debian / OpenSuSe / Fedora

    Unp is a small perl script which makes extracting archive files easy. It supports several compression and archiver programs, chooses the right one(s) automatically, and extracts one or more files using a single command. It supports rar, zip, tar.gz, deb, tar.bz2, rpm etc..
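    The core idea, dispatching on the archive's extension to the right external tool, can be sketched in a few lines of Python (a simplified illustration; unp itself is a perl script and knows many more formats):

```python
# Map archive suffixes to the command that unpacks them.
# This table is an illustrative subset; unp itself knows many more formats.
UNPACKERS = [
    (".tar.gz",  ["tar", "xzf"]),   # longest suffixes first, so .tar.gz
    (".tar.bz2", ["tar", "xjf"]),   # is never mistaken for plain .tar
    (".tar",     ["tar", "xf"]),
    (".zip",     ["unzip"]),
    (".rar",     ["unrar", "x"]),
    (".deb",     ["dpkg-deb", "-x"]),
    (".rpm",     ["rpm2cpio"]),
]

def choose_unpacker(filename):
    """Return the command line that would extract `filename`."""
    for suffix, cmd in UNPACKERS:
        if filename.endswith(suffix):
            return cmd + [filename]
    raise ValueError("unknown archive type: %s" % filename)
```

    For example, choose_unpacker("backup.tar.bz2") returns ["tar", "xjf", "backup.tar.bz2"]; unp then simply runs the chosen command for each archive you pass it.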

    Ubuntu / Debian: $ sudo apt-get install unp
    OpenSuSe users can use the "1-click" installer to install Unp - here
    Fedora: # yum install unp

    Method 1 (unpack all archives in a directory)
    $ unp *.*
    Method 2 (unpack, for example, all .rar archives in a directory):
    $ unp *.rar
    Method 3 (unpack one archive):
    $ unp archivefile
    Method 4 (unpack several archives at the same time):
    $ unp archivefile1 archivefile2
    More examples:
    unp *.tar.gz, unp *, unp *.rpm, unp *.deb, unp *.zip, unp *.rar
    Check out the manual pages of unp for more details:
    $ man unp
    Read more

    mz (Mausezahn) : Network Traffic Generation Tool for Linux

    Note: Mausezahn is basically a network and firewall testing tool. Don't use this tool if you are not aware of its consequences or have only little knowledge about networks and data communication. If you get caught, or damage something of your own, then this is completely your fault.

    Mausezahn is a fast traffic generator, written in C, which allows you to send nearly every possible and impossible packet. Mausezahn can be used, for example:

     * As a traffic generator (e.g. to stress multicast networks)
     * For penetration testing of firewalls and IDS
     * For DoS attacks on networks (for audit purposes of course)
     * To find bugs in network software or appliances
     * For reconnaissance attacks using ping sweeps and port scans
     * To test network behavior under strange circumstances (stress test, malformed packets, ...)

    ...and more. Mausezahn is basically a versatile packet creation tool for the command line, with a simple syntax and built-in help. It can also be used within (bash) scripts to perform combinations of tests.

    OpenSuSe users can install Mausezahn using the "1-click" installer - here
    Ubuntu users can install it using the command: sudo apt-get install mz

    Using Mausezahn:
    Send an arbitrary sequence of bytes through your network card 1000 times:
    # mz eth0 -c 1000 "ff:ff:ff:ff:ff:ff ff:ff:ff:ff:ff:ff cc:dd 00:00:00:ca:fe:ba:be"

    You can send more complex packets easily with the built-in packet builders using the -t option. Let's send a forged DNS response to a target host by impersonating the DNS server x.x.x.x (the angle-bracket placeholders stand in for the real addresses and hostname):
    # mz eth0 -A x.x.x.x -B <target-ip> -t dns "q=<hostname>, a=<spoofed-ip>"

    Perform a TCP SYN flood attack against all hosts in a subnet (placeholder for the target subnet):
    # mz eth0 -c 0 -Q 50,100 -A rand -B <target-subnet> -t tcp "flags=syn, dp=1-1023"

    Look at the Mausezahn man pages for more details.

    Note: The above examples are very dangerous commands; use them at your own risk. The author is not responsible for any damage caused while using the above examples.

    Read more