Thursday, May 11, 2006 

IRIX: CHECK FOR NEW CONFIGS

SGI loves to try to simplify your life with chkconfig switches that toggle various services on
and off. After each upgrade, DOUBLE check the chkconfig switches. If something suddenly
stops working, check here first.

For instance, tape drives in IRIX are disabled in a default install. To enable the tape subsystem:

# chkconfig ts on

# /etc/init.d/ts start
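
Since upgrades can silently flip these flags, it's worth snapshotting them so you can diff afterwards. A minimal sketch (the file names under /var/tmp are my own choice); chkconfig with no arguments prints the state of every flag:

# chkconfig > /var/tmp/chkconfig.before
(upgrade the system)
# chkconfig > /var/tmp/chkconfig.after
# diff /var/tmp/chkconfig.before /var/tmp/chkconfig.after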

Friday, May 05, 2006 

KILLING MORE USERS

To kill all processes of a particular user, run the following as root at the Unix prompt:

# kill -9 `ps -fu username |awk '{ print $2 }'|grep -v PID`

We can also pass the username as a command-line argument if we put this command into a script.
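
A minimal sketch of such a script (the name killuser is my own):

#!/bin/sh
# killuser - kill every process owned by the user given as $1
kill -9 `ps -fu $1 | awk '{ print $2 }' | grep -v PID`

Then, as root:

# killuser username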

 

CLEANING DIRECTORIES

Compilations in Unix create many temporary files that occupy a lot of disk space. A simple script can get rid of them.

find $1 \( -name a.out -o -name '*.o' -o -name 'core' -o -name '*.ii' -o -name '*.ti' -o -name '*.class' -o -name '*.pur' \) -exec rm {} \;

Save the above line in a file and make the file executable. For example, if the file containing the above code has the name 'clean', then:

example% clean .

will remove all the files specified in the script from the given directory and every subdirectory within it. You can add or remove file patterns in the script to suit your needs.
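
Here is a slightly fuller sketch of the same script, with a shebang and a default of the current directory when no argument is given:

#!/bin/sh
# clean - remove common compilation leftovers under the given
# directory (defaults to the current directory)
dir=${1:-.}
find "$dir" \( -name a.out -o -name '*.o' -o -name core -o -name '*.ii' \
    -o -name '*.ti' -o -name '*.class' -o -name '*.pur' \) -exec rm {} \;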

 

GREP TEXT NOT BINARY

In some directories, such as /etc, you have a mix of file types. You may want to grep for a string but don't want to bother with the binaries, data files, etc. To accomplish
this, searching only the text files, do this:

grep 'string' `file * | egrep 'script|text' | awk -F: '{print $1}'`

 

REBOOTING BECAUSE OF FORK BOMBS

There is nothing more frustrating for an administrator than having to reboot a system because of a fork bomb.

(When the number of processes on the system reaches the maximum limit and a user, even the superuser, tries to execute a command, the system responds with "vfork failed".)

In Solaris under SPARC, this can be controlled by adding a line to /etc/system:

set maxuprc=64

Then reboot the system. Now a user can have a maximum of 64 processes under his ownership. By default the 'maxuprc' value is 16*maxusers - 5, where 'maxusers' is another tunable
parameter in /etc/system.

Caution: Take a backup of the /etc/system file before you make the changes, so that you can revert to the old file using the boot -a option in case the new one leaves the
system inconsistent.
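
To verify the value currently in effect, one option (a sketch; the exact wording of the output can vary between Solaris releases) is to grep the kernel tunables that sysdef prints:

# sysdef | grep -i "per user"

The line containing v.v_maxup shows the active per-user process limit.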

Friday, April 14, 2006 

Get the hidden files

A safe way of grabbing all "hidden" files is to use '.??*' rather than '.*', since it only matches names of three or more characters. Admittedly, this will miss any hidden files that are only a single character after the dot, but it will also always miss '.' and '..', which is probably more important...
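
A quick illustration with a made-up directory:

$ ls -d .*
.  ..  .a  .profile
$ ls -d .??*
.profile

Note that .??* skips the single-character .a as well as . and .. -- the trade-off described above.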

Wednesday, April 12, 2006 

Building a Linux supercomputer using SSH and PVM

If you have a couple of old Linux boxes sitting around, then you've got the makings of a supercomputer. Dust them off, install Secure Shell (SSH) and Parallel Virtual Machine (PVM), and start your complex algorithms.

All right, it's not quite as simple as that. PVM handles only the messaging between the machines. You must write your own programs to actually do anything.

First, network your PCs and set up NFS on each. I'm not going to go into detail because most Linux distributions take care of everything for you. With Debian, for example, simply connect a cable between your new PC and your network switch, stick in your installation CD, switch the PC on, and follow the prompts. If you need more information, take a look at the Linux.com how-tos on networking and NFS.

Now you can start setting up your PCs as a single supercomputer. In order for them to work as one, you need a single home directory -- hence, the need for NFS. Choose the machine that hosts the home directory and edit /etc/exports. If the file isn't there, then you must set up the PC as an NFS server -- check your distro's documentation. If you're using Debian, simply type sudo apt-get install nfs-kernel-server.

Now add in the details for each of the hosts where you want the common home directory. In this example, I'm exporting my home directory from polydamas (my NFS server) to three hosts: acamas, cassandra, and hector:

/home acamas(rw)
/home cassandra(rw)
/home hector(rw)

You can see the full list of possible options when exporting by typing man exports on the command line. Don't forget to add all hosts into your /etc/hosts file too.

Now either reboot your NFS server or check your distro's documentation for the relevant command that lets your hosts see the exports. On Debian, the command is exportfs -a.

You can now turn to your NFS client hosts and set them up so that they use the home directory that you're exporting from the NFS server. If you feel that exporting the whole /home is overkill, simply export the home directory for the user that you want to be able to run the supercomputer.

If you're confident that everything is going to work, just move the current /home somewhere safe (rename it /home_old, for instance). Run mkdir /home, then edit your /etc/fstab file so that it contains the details for the NFS server:

polydamas:/home /home nfs rw,sync 0 0

Make sure that your /etc/hosts file contains the IP address for your server, then either reboot or reload the NFS data:

sudo /etc/init.d/mountnfs.sh

If you're not quite that brave, mount the directories manually before you commit to automating the process fully.

Set up SSH

Now that you have a common /home, you need SSH. Chances are, your Linux distribution came bundled with SSH. Each of my machines uses Debian, which loads OpenSSH automatically.

Set up SSH so that you don't have to enter a password each time you use it. For more information, take a look at Joe Barr's "CLI Magic: OpenSSH" and Joe 'Zonker' Brockmeier's "CLI Magic: More on SSH."

You'll find yourself benefiting from a common /home directory. Instead of having to set up an authorized_keys2 file on each machine, you only have to do it once on the NFS server:

ssh-keygen -t dsa
cat .ssh/id_dsa.pub > .ssh/authorized_keys2
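
It's worth confirming the password-less setup before moving on; each of these should print the remote hostname without prompting:

ssh acamas uname -n
ssh cassandra uname -n
ssh hector uname -n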

If you just want to be able to run processes in parallel, then you're ready to go.

Looking for more? You might want to create programs that use the resources of all of your machines. Let's say you have three Linux boxes connected to your network, and you have three Linux scripts sitting in your home directory that you need to process. Simply run each one via SSH:

#Run the files on the machines
ssh bainm@acamas ./batch_file1 &
ssh bainm@cassandra ./batch_file2 &
ssh bainm@hector ./batch_file3 &

You can distribute work around your network easily using this technique. Although useful, the scripts don't provide any feedback. You must check each machine manually for the progress of each file before you continue with your computations. However, you can add feedback by making each of the distributed files write its results back to a common file on your home directory.

In this next example, you can calculate pi to any number of decimal places:

#File name: calc_pi
RESULT_FILE=$1
DECIMAL_PLACES=$2
RESULT=$(echo "scale=$DECIMAL_PLACES;4*(4*a(1/5)-a(1/239))"|bc -l)
echo "$(uname -n) Pi: $RESULT" >> $RESULT_FILE

I calculated pi = 4 x (4 arctan(1/5) - arctan(1/239)) because that's what I was taught in college; there are other ways.

Now tell each of your machines to run a process:

ssh bainm@acamas . ./calc_pi pi_results 10 &
ssh bainm@cassandra . ./calc_pi pi_results 20 &
ssh bainm@hector . ./calc_pi pi_results 30 &

After a couple of seconds, a new file (pi_results) contains these results:

acamas Pi: 3.1415926532
cassandra Pi: 3.14159265358979323848
hector Pi: 3.141592653589793238462643383272

Let PVM do the work for you

While this is useful to know, you're probably better off using software that does all the work for you. If you're happy using C, C++, or Fortran, then PVM may be for you. Download it from the PVM Web site, or check if you can load it using your distro's methods. For instance, use this command on Debian:

sudo apt-get install pvm

Install PVM on all of the machines, then log on to the computer you want to use as your central host. If it's not your NFS server, remember to generate a key for it and add it to the .ssh/authorized_keys2 file. Once you start PVM by typing pvm on the command line, you can start adding hosts. Don't worry about starting PVM on the other machines -- that's done automatically when you add a host.

$ pvm
pvm> add acamas
add acamas
1 successful
HOST     DTID
acamas  80000
pvm>

If that seems a bit long-winded, then list your hosts in a file and get PVM to read it:

$ pvm hostfile
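
The hostfile itself, in its simplest form, is just one hostname per line, with # marking comments -- for example, using the hosts above:

# hostfile: machines to add to the virtual machine
acamas
cassandra
hector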

Type conf to check which hosts are loaded:

pvm> conf
conf
4 hosts, 1 data format
HOST         DTID  ARCH   SPEED  DSIG
cassandra   40000  LINUX   1000  0x00408841
acamas      80000  LINUX   1000  0x00408841
hector      c0000  LINUX   1000  0x00408841
polydamas  100000  LINUX   1000  0x00408841
pvm>

Type quit to exit PVM and leave it running in the background. Type halt to shut down PVM.

Now you can create a program that uses PVM. You need the PVM source code. As always, check the details for your distro -- usually, you can get the files easily. For example, Debian uses this command:

sudo apt-get install pvm-dev

You need the files on only one of your machines; thanks to the common home directory, you can use any of them. Create a directory called ~/pvm3/examples and look for a file called examples.tar.gz -- you'll probably find it in /usr/share/doc/pvm. Unpack it into the directory you just created. You'll see a set of self-explanatory files that show you exactly how to program with PVM. Start with master1.c and its associated file slave1.c. Examine the source code to see exactly how the process operates. To see the code in action, type:

aimk master1 slave1

aimk -- the program for compiling your PVM programs -- creates your executables and places them in ~/pvm3/bin/LINUX. Simply change to this directory and type master1. Assuming you're on the machine where you're running PVM, you should see something like this:

$ master1
Spawning 12 worker tasks ... SUCCESSFUL
I got 1300.000000 from 7; (expecting 1300.000000)
I got 1500.000000 from 8; (expecting 1500.000000)
I got 100.000000 from 1; (expecting 100.000000)
I got 700.000000 from 4; (expecting 700.000000)
I got 1100.000000 from 0; (expecting 1100.000000)
I got 1700.000000 from 9; (expecting 1700.000000)
I got 1900.000000 from 10; (expecting 1900.000000)
I got 2100.000000 from 11; (expecting 2100.000000)
I got 1100.000000 from 6; (expecting 1100.000000)
I got 900.000000 from 5; (expecting 900.000000)
I got 300.000000 from 2; (expecting 300.000000)
I got 500.000000 from 3; (expecting 500.000000)

If you're a Fortran programmer, don't worry -- there are some examples for you as well. Other languages don't offer examples, but look on the PVM Web site for support for numerous languages, including Perl, Python, and Java. You'll also find various applications to help with PVM, such as XPVM for a graphical interface.

Tuesday, April 04, 2006 

Accessing Positional Variables

Accessing positional variables from the left is simple in bash or KornShell 93 (ksh88 will not work, and I won't cover csh's syntax).

You can use the form ${*:N:M}, where N is the number of the positional variable you want to access and M is the number of variables from that point that you want.

So, if you type:

set a b c d
echo ${*:3:1}

the echo will give you c.

You can be sneakier using a looping construct. This will give you the positional parameters in order:

for i in 1 2 3 4
do
echo ${*:$i:1}
done

Suppose you want to reference the 2nd to last positional parameter.
Try ${*:$#-1:1}.

${*:$#:1} gives you the last one. Even sneakier, here are the positional variables in reverse order:

for i in 1 2 3 4
do
echo ${*:$#-$i+1:1}
done

Beware: different shells act slightly differently. For the above example of set a b c d, bash gives d for echo ${*:$#-5:1} (a negative offset counts back from the end), while ksh gives ksh.

Monday, April 03, 2006 

Linux Renaming Sets of Files

Easy way to rename a set of files in Linux:

*** No need for any shell script
*** or any other program --
*** simply use the rename command

SYNTAX:
% rename <from> <to> <files>

eg: To rename all .jpg to .gif use:

% rename .jpg .gif *

To rename only .jpg with starting
letter as t to .gif use:

% rename .jpg .gif t*
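
If your system lacks the rename utility, a plain shell loop does the same job for the extension case (a sketch):

for f in *.jpg
do
mv "$f" "${f%.jpg}.gif"
done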

 

Separate Shell Command history

One other suggestion on how to separate shell command history files by terminal (it works on
HP-UX):

Change the permissions of your .sh_history to 000:

$ chmod 000 .sh_history

After this is done, the various shells will not save their commands to the file and will instead keep the history in memory. Since each shell runs as a separate process with its own memory, every shell will have its own in-memory history of commands.
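
Another approach, if you'd rather keep each terminal's history on disk, is to give every terminal its own history file via HISTFILE. A sketch for ksh or bash, to be placed in your .profile:

# One history file per terminal device (/dev/pts/0 -> .sh_history._dev_pts_0)
HISTFILE=$HOME/.sh_history.`tty | tr / _`
export HISTFILE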

Thursday, March 30, 2006 

NFS between Solaris & Linux

If you receive error messages such as "unknown version" when attempting to mount a Linux-based NFS server from Solaris, you probably have an incompatibility between the NFS versions
running on the two machines. Linux uses version 2, while Solaris uses version 3. To get the machines to communicate, use the vers option on the Solaris machine as follows:

mount -o vers=2 nfsserver:/remotedir /localdir
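
To make the mount permanent, the matching /etc/vfstab entry on the Solaris side would look something like this (a sketch using the same names as above):

nfsserver:/remotedir  -  /localdir  nfs  -  yes  vers=2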

 

How To Set Up Database Replication In MySQL

This tutorial describes how to set up database replication in MySQL. MySQL replication allows you to have an exact copy of a database from a master server on another server (the slave), and all updates to the database on the master server are immediately replicated to the database on the slave server so that both databases are in sync. This is not a backup policy, because an accidentally issued DELETE command will also be carried out on the slave; replication can help protect against hardware failures, though.

In this tutorial I will show how to replicate the database exampledb from the master with the IP address 192.168.0.100 to a slave. Both systems (master and slave) are running Debian Sarge; however, the configuration should apply to almost all distributions with little or no modification.

Both systems have MySQL installed, and the database exampledb with tables and data is already existing on the master, but not on the slave.

I want to say first that this is not the only way of setting up such a system. There are many ways of achieving this goal but this is the way I take. I do not issue any guarantee that this will work for you!

1 Configure The Master

First we have to edit /etc/mysql/my.cnf. We have to enable networking for MySQL, and MySQL should listen on all IP addresses, therefore we comment out these lines (if present):

#skip-networking
#bind-address = 127.0.0.1

Furthermore, we have to tell MySQL for which database it should write logs (these logs are used by the slave to see what has changed on the master) and which log file it should use, and we have to specify that this MySQL server is the master. So we add the following lines to /etc/mysql/my.cnf:


log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db=exampledb
server-id=1

Then we restart MySQL:

/etc/init.d/mysql restart

Then we log into the MySQL database as root and create a user with replication privileges:

mysql -u root -p
Enter password:

Now we are on the MySQL shell.

GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY '<password>'; (Replace <password> with a real password!)
FLUSH PRIVILEGES;

Next (still on the MySQL shell) do this:

USE exampledb;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

The last command will show something like this:

+---------------+----------+--------------+------------------+
| File          | Position | Binlog_do_db | Binlog_ignore_db |
+---------------+----------+--------------+------------------+
| mysql-bin.006 | 183      | exampledb    |                  |
+---------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Write down this information; we will need it later on the slave!

Then leave the MySQL shell:

quit;


There are two possibilities to get the existing tables and data from exampledb from the master to the slave. The first one is to make a database dump; the second one is to use the LOAD DATA FROM MASTER; command on the slave. The latter has the disadvantage that the database on the master will be locked during this operation, so if you have a large database on a high-traffic production system, this is not what you want, and I recommend following the first method in this case. However, the latter method is very fast, so I will describe both here.

If you want to follow the first method, then do this:

mysqldump -u root -p<password> --opt exampledb > exampledb.sql (Replace <password> with the real password for the MySQL user root! Important: There is no space between -p and <password>!)

This will create an SQL dump of exampledb in the file exampledb.sql. Transfer this file to your slave server!
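
For the transfer, scp works fine; a sketch (the slave address 192.168.0.101 is just an example):

scp exampledb.sql root@192.168.0.101:/tmp/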

If you want to go the LOAD DATA FROM MASTER; way then there is nothing you must do right now.


Finally we have to unlock the tables in exampledb:

mysql -u root -p
Enter password:
UNLOCK TABLES;
quit;

Now the configuration on the master is finished. On to the slave...

2 Configure The Slave

On the slave we first have to create the database exampledb:

mysql -u root -p
Enter password:
CREATE DATABASE exampledb;
quit;


If you have made an SQL dump of exampledb on the master and have transferred it to the slave, then it is time now to import the SQL dump into our newly created exampledb on the slave:

mysql -u root -p<password> exampledb < /path/to/exampledb.sql (Replace <password> with the real password for the MySQL user root! Important: There is no space between -p and <password>!)

If you want to go the LOAD DATA FROM MASTER; way then there is nothing you must do right now.


Now we have to tell MySQL on the slave that it is the slave, that the master is 192.168.0.100, and that the master database to watch is exampledb. Therefore we add the following lines to /etc/mysql/my.cnf:

server-id=2
master-host=192.168.0.100
master-user=slave_user
master-password=secret
master-connect-retry=60
replicate-do-db=exampledb

Then we restart MySQL:

/etc/init.d/mysql restart


If you have not imported the master exampledb with the help of an SQL dump, but want to go the LOAD DATA FROM MASTER; way, then it is time for you now to get the data from the master exampledb:

mysql -u root -p
Enter password:
LOAD DATA FROM MASTER;
quit;

If you have phpMyAdmin installed on the slave you can now check if all tables/data from the master exampledb is also available on the slave exampledb.


Finally, we must do this:

mysql -u root -p
Enter password:
SLAVE STOP;

In the next command (still on the MySQL shell) you have to replace the values appropriately:

CHANGE MASTER TO MASTER_HOST='192.168.0.100', MASTER_USER='slave_user', MASTER_PASSWORD='<password>', MASTER_LOG_FILE='mysql-bin.006', MASTER_LOG_POS=183;

  • MASTER_HOST is the IP address or hostname of the master (in this example it is 192.168.0.100).
  • MASTER_USER is the user we granted replication privileges on the master.
  • MASTER_PASSWORD is the password of MASTER_USER on the master.
  • MASTER_LOG_FILE is the file MySQL gave back when you ran SHOW MASTER STATUS; on the master.
  • MASTER_LOG_POS is the position MySQL gave back when you ran SHOW MASTER STATUS; on the master.

Now all that is left to do is start the slave. Still on the MySQL shell we run

START SLAVE;
quit;

That's it! Now whenever exampledb is updated on the master, all changes will be replicated to exampledb on the slave. Test it!
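
A quick sketch of such a test, assuming a hypothetical table testtable with a single integer column in exampledb:

mysql -u root -p -e "INSERT INTO exampledb.testtable VALUES (42);"   (on the master)
mysql -u root -p -e "SELECT * FROM exampledb.testtable;"             (on the slave)

The inserted row should appear on the slave almost immediately.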


Sunday, March 26, 2006 

My sysadmin toolbox

Torsmo

Torsmo is a desktop system monitoring tool, and one of the best I have ever used.

Torsmo differs from other system monitors, such as GKrellM, in that it does not spawn a new window, but instead renders text directly to your desktop. It can display almost anything about your system, including uptime, current CPU usage, network activity, hard drive usage, memory usage, and swap usage. The program's developers wrote it to use as little of your system's resources as possible, and it does a good job of this.

You can configure what torsmo displays through its configuration file, normally found in your home directory as .torsmorc. You can look at my configuration file at http://realfolkblues.org/torsmorc.

ImageMagick

ImageMagick makes it easy to perform many operations on images directly from the command line. Among its many useful tools, identify is used to display information about an image, import can save any window on an X server to an image file, and convert can convert an image to almost any format with a single command.

You can use identify to show detailed information about a photo or image by running identify imagename. For example:

jon@gimli:~$ identify /media/pics/Group-Photo.jpg
/media/pics/Group-Photo.jpg JPEG 819x614 DirectClass 371kb 0.050u 0:01

I often use import to take a screenshot of my desktop. For example, to save a screenshot of your desktop to your home directory as a PNG image named screenshot.png, run import -window root $HOME/screenshot.png. ImageMagick will save the screenshot in the image format specified by the file extension.

Using convert to convert an image from one format to another could not be easier -- just run convert imagename.png imagename.jpg. Again, ImageMagick takes the format from the extension, so you don't need to give it an additional option to specify the new format.

Aterm

While KDE and GNOME come with their own terminal applications, these applications do much more than I need. Aterm, on the other hand, is a simple terminal program with fewer features, so it appears almost instantly when you start it. Though it's not as bulky as other terminal emulators, Aterm does have many useful options, which you can read about by typing aterm --help at the command line.

I like to start Aterm with a black background, white text, a 1,000-line history buffer, and display all text using the font "drift" from the "artwiz" font family. To get this, use aterm -bg black -fg white -sl 1000 -fn "-artwiz-drift-*".

Root-tail

Sometimes I use tail -f to monitor logfiles for changes. While useful, it's awkward to have a terminal window open all the time to monitor a logfile. Root-tail provides an excellent alternative by displaying logfiles as text rendered on your desktop in whatever font and color you specify. It also updates the text on your desktop at the interval you specify.

To use root-tail, just run root-tail filename to monitor a file. Root-tail has many useful options, which you can see by typing root-tail --help, or just read its man page.
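
For example, something like this displays the syslog in green at a given position (the geometry and log path are assumptions; check the man page for your build's exact options):

root-tail -g 80x25+20+40 /var/log/syslog,green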

Quod Libet

Out of the hordes of music players available for Linux, Quod Libet is my favorite. One of the things I like about Quod Libet is its ability to make playlists based on regular expressions. You can operate the player from the command line by running the program with an argument, making it simple to set up hotkeys with KDE to control the player. For instance, if I go to the KDE Control Center hotkey section and add a hotkey such as Control-Alt-X to run the command quodlibet --play, I can then simply press Control-Alt-X to cause Quod Libet to play music. See all of the command line arguments that are available by running quodlibet --help.

Quod Libet has excellent ID3 tag editing, with the ability to edit tags based on the filenames of songs, rename files based on their tags, and change many files at once. Quod Libet also supports album cover art.

In addition to its native features, Quod Libet also has an extensive collection of plugins that can greatly extend its functionality. One particularly interesting plugin, Animated On-Screen Display, can display information about the music Quod Libet is playing. See the full list of plugins on the Quod Libet site.

Transmission

While I normally use Azureus as my BitTorrent client, it's fairly resource-intensive, and that makes it less appealing for me. In situations where I need a simple and fast client, I use Transmission. It handles torrents using a fraction of the memory and CPU time that Azureus uses. Unlike Azureus, it has the ability to run all of the torrents on a single port, removing the need to allow entire port ranges through a firewall in order to use the program.

Transmission is perfect for users who occasionally need to download a torrent. While Azureus uses Java to draw its interface, Transmission uses GTK+, helping it fit in perfectly with a GNOME desktop. Transmission also sports a command-line interface that is especially useful when you must run it in a remote environment.

 

Network Monitoring with Zabbix

The ZABBIX server requires the following system resources:

* 10 MB of disk space (100 MB recommended)
* 64 MB of RAM (128 MB recommended)
* MySQL or PostgreSQL as the backend database


First we define two roles:

The server: all the information comes together here and is processed into a database. Note that the server can be monitored too, so it runs an agent as well.

The agent: information is gathered here and is polled by the server.

Setup of the Server:

http://prdownloads.sourceforge.net/zabbix/zabbix-1.1beta7.tar.gz?download



1 - Make the zabbix user and group

groupadd zabbix
useradd -c 'Zabbix' -d /home/zabbix -g zabbix -s /bin/bash zabbix


mkdir /home/zabbix
chown -R zabbix.zabbix /home/zabbix




2 - Untar the sources

cd /home/zabbix

tar zxvpf zabbix-1.1beta7.tar.gz

mv zabbix-1.1beta7 zabbix

cd zabbix

chown -R zabbix.zabbix .

su - zabbix





3 - Create a zabbix database and populate it

mysql -p -u root

create database zabbix;
quit;

cd create/mysql

mysql -u root -p zabbix < schema.sql

cd ../data

mysql -u root -p zabbix < data.sql
mysql -u root -p zabbix < images.sql

cd ../../




4 - Configure, compile and install the server
We run an agent on the server too, so we compile that as well ;)

./configure --prefix=/usr --with-mysql --with-net-snmp --enable-server --enable-agent &&
make
su
make install
exit




5 - Prepare the rest of the system

As root edit

/etc/services

Add:

zabbix_agent 10050/tcp # Zabbix ports
zabbix_trap 10051/tcp




mkdir /etc/zabbix

cp misc/conf/zabbix_agentd.conf /etc/zabbix/
cp misc/conf/zabbix_server.conf /etc/zabbix/




Edit /etc/zabbix/zabbix_agentd.conf

Make sure that the Server parameter points to the server address;
for the agent that runs on the server itself, it looks like this:

Server=127.0.0.1



Edit /etc/zabbix/zabbix_server.conf

For small sites this default file will do; however, if you are into
tweaking your config for a site with 10+ hosts, this is the place.



Start the server

su - zabbix
zabbix_server
exit




Start the client:

su - zabbix
zabbix_agentd
exit
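
To check that the agent is answering, you can query it directly with zabbix_get, which is built along with the agent (a sketch, assuming the agent listens on the default port; the key name may vary between versions):

zabbix_get -s 127.0.0.1 -p 10050 -k "agent.ping"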




6 - Configure web interface

Edit frontends/php/include/db.inc.php:

$DB_TYPE ="MYSQL";
$DB_SERVER ="localhost";
$DB_DATABASE ="zabbix";
$DB_USER ="root";
$DB_PWD ="secret";


mkdir /home/zabbix/public_html
cp -R frontends/php/* /home/zabbix/public_html/




Edit /etc/apache/httpd.conf

Make this work (the per-user public_html block):

<Directory /home/*/public_html>
    AllowOverride FileInfo AuthConfig Limit Indexes
    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    <Limit GET POST OPTIONS PROPFIND>
        Order allow,deny
        Allow from all
    </Limit>
    <LimitExcept GET POST OPTIONS PROPFIND>
        Order deny,allow
        Deny from all
    </LimitExcept>
</Directory>

/etc/init.d/apache restart


chown -R zabbix.zabbix public_html




Setup of an Agent

http://prdownloads.sourceforge.net/zabbix/zabbix-1.1beta7.tar.gz?download



1 - Make the zabbix user and group

groupadd zabbix
useradd -c 'Zabbix' -d /home/zabbix -g zabbix -s /bin/bash zabbix

mkdir /home/zabbix
chown -R zabbix.zabbix /home/zabbix




2 - Untar the sources

cd /home/zabbix

tar zxvpf zabbix-1.1beta7.tar.gz

mv zabbix-1.1beta7 zabbix

cd zabbix

chown -R zabbix.zabbix .

su - zabbix




3 - Configure, compile and install the agent

./configure --prefix=/usr --with-mysql --with-net-snmp --enable-agent
make
su
make install
exit

mkdir /etc/zabbix
cp misc/conf/zabbix_agentd.conf /etc/zabbix/



Edit /etc/zabbix/zabbix_agentd.conf

Make sure that the Server parameter points to the server address:

Server=xxx.xxx.xxx.xxx



4 - Prepare the rest of the system

Edit /etc/services

Add:

zabbix_agent 10050/tcp # Zabbix ports
zabbix_trap 10051/tcp





5 - Start the agent

su - zabbix
zabbix_agentd
exit





What Next?

Now point your browser to:

http://www.example.com/~zabbix


Login with username: Admin
No Password

Here you can really knock yourself out.
This howto was intended to show you how to install this mother;
configuring the monitoring functions is a whole other ballgame.

For now I leave you with some pointers to documentation:

http://www.zabbix.com/documentation.php
http://sourceforge.net/projects/zabbix
http://www.google.com/search?q=zabbix
