New Cluster Test (cluster deployment and test tool)

Version 1 by Dimitar Manov (Jul 08, 2014); current version by Gergana Petkova (Sep 18, 2014).

The project new-cluster-test is used to deploy a GraphDB cluster through a configuration file. The scenario is that you have several physical (or virtual) machines, called boxes, and on each of them you want to run one or more GraphDB nodes. (A "node" here means a GraphDB master or GraphDB worker instance.) Each node runs in a tomcat, and several nodes can share one tomcat (depending on the configuration).

The tool is used on one command-and-control machine, which may or may not participate in the cluster itself.



h1. The Test Help Tool

The tool was created to run cluster tests, so it is tailored for such scenarios. It installs and runs {{test-help-tool.jar}} on each box. This tool has several roles, described below:
* *Installation package*
It carries tomcat, GraphDB-Enterprise, and their dependencies, packed in the jar. During deployment, they are unpacked in the appropriate places.
* *Remote command executor*
It executes commands on the machine where it runs. These commands are used both during deployment and during testing (e.g. starting or stopping a specific tomcat instance). Although this role can be handled over SSH (and, during deployment, it is), SSH is harder to configure for testing on Windows machines, so the tests execute remote commands exclusively through the proxy.
* *IP port redirection (proxy)*
The cluster can be deployed so that the communication between nodes passes through the proxies. This allows test scripts to simulate various network problems, such as latency and disconnection.

{note}The tool's remote exec is a security hole, so it is not advisable to open it on an external network or on machines containing sensitive information\!{note}



h1. Configuration file

All of the commands described below require the cluster configuration to run properly. Here, we describe the format and the meaning of the configuration file. The file must be called {{test.config}} and must be located in the user's home directory or in the tool's directory on the command machine.



h2. Format and semantics

The file is read and interpreted one line at a time.


h3. Comments and blank lines

Lines with the hash symbol ('#') in the first column are treated as comments and are ignored. These are the only "officially" recognized comments.

Blank lines are skipped.
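For example, only the first of these two lines is an "official" comment; the second, with its leading space, would fall through to the unrecognized-line handling described under Caveats:

{code}
# this is a comment
 # not an official comment: the '#' is not in the first column
{code}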


h3. Proxy configuration

By default, the nodes in the cluster are configured to communicate through the proxy. This can be switched off with the following directive:


{code}
NO_PROXY
{code}
This directive should come before the master/worker declarations, and it makes sense for it to be the first non-comment line in the file.
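For example, the first lines of a proxy-free configuration might look like this (an illustrative sketch):

{code}
# deploy the cluster without proxies
NO_PROXY
{code}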

h3. Box declarations

{code}
box1 server-or-ip onto /space1/OWLIM 9090
{code}

A box declaration starts with the word "box", followed by a number (the JMX port value above is illustrative). Boxes must have unique names; in case of duplicates, the last declaration is used.

Next comes the box's name or IP address ("server-or-ip").

Then comes the user ("onto") who will run GraphDB on this machine. The command-and-control machine must have passwordless SSH access to this user/machine.

Next is the absolute path where all GraphDB instances on this machine will be installed ("/space1/OWLIM").

The last parameter is the JMX port on which the proxy will listen. Do not use port numbers >= 20000, because they are already used for redirects.

h3. Master/worker declaration

Next come the master and worker declarations. They look like this:

{code}
master1 box1 master 10080 home.master1
worker1|w1|w1dc1 box1 worker 8080 home.worker1
{code}

Again, the line should start with the word "master" or "worker", followed by a number. The bar symbols ("\|") add optional aliases, which can be used in the tests.

Next comes the box on which the node should be installed ("box1").

After that comes the repository name ("master"), followed by the tomcat port (10080), and the tomcat's home directory ("home.master1").

The main caveat here is that these lines also declare the tomcat instances. For example, the first line means that there will be a tomcat on box1 running on port 10080. The home.master1 directory is relative to the box's home directory. Here is how it will look:

* /
** space1
*** OWLIM
**** home.master1
***** ... (the tomcat installation and repository data, down to the SYSTEM repository)

{note}For each box, only one tomcat can run on port 10080, so if another node specifies the same box and port, it should also specify the same home dir but must have a different repository name.{note}
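Putting the pieces together, the semantics above can be sketched in a short parser. This is an illustrative sketch only, not the tool's actual code: the function name, the host address, and the port values in the embedded example are hypothetical.

{code}
# Illustrative parser for the line-oriented test.config format.
# NOT the tool's actual code; the example host and ports are made up.

EXAMPLE = """\
# example cluster layout
NO_PROXY

box1 192.168.0.11 onto /space1/OWLIM 9090

master1 box1 master 10080 home.master1
worker1|w1|w1dc1 box1 worker 8080 home.worker1
"""

def parse_config(text):
    config = {"use_proxy": True, "boxes": {}, "nodes": {}}
    for line in text.splitlines():
        if not line.strip() or line.startswith("#"):
            continue                      # skip blanks and first-column comments
        fields = line.split()
        name = fields[0]
        if name == "NO_PROXY":
            config["use_proxy"] = False   # must precede master/worker lines
        elif name.startswith("box"):
            host, user, path, jmx_port = fields[1:5]
            # duplicate box names: the last declaration wins
            config["boxes"][name] = {"host": host, "user": user,
                                     "path": path, "jmx_port": int(jmx_port)}
        elif name.startswith(("master", "worker")):
            aliases = name.split("|")     # '|' introduces optional aliases
            box, repo, port, home = fields[1:5]
            node = {"box": box, "repo": repo,
                    "tomcat_port": int(port), "home": home}
            for alias in aliases:         # every alias maps to the same node
                config["nodes"][alias] = node
    return config

config = parse_config(EXAMPLE)
{code}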

h1. Commands

All commands are run through [ant|http://ant.apache.org/]. The main commands are:


{code}
ant cluster-utils-deploy
{code}
to deploy the cluster.


{code}
ant cluster-utils-status
{code}
to get the cluster status.


{code}
ant cluster-utils-start-all
{code}
to start the cluster.


{code}
ant cluster-utils-stop-all
{code}
to stop the cluster.


{code}
ant cluster-utils-clear
{code}
to remove the deployment.


h1. Caveats

* When using this tool, the cluster can be deployed only on Linux machines. The command-and-control machine can run Windows, Linux, or OS X.
* Unrecognized configuration lines are ignored. Recognized but bad configuration lines may or may not be reported.
* Other configuration errors may or may not be detected, reported, or handled.
* The proxy tool creates a log file named {{/tmp/test-help-tool.log}}. It is not automatically deleted, so it can grow quite large.


h1. Disclaimer

This tool was created to improve our test installations. It has many hard-coded assumptions, a bad configuration file syntax, very bad error handling and reporting, and probably quite a few bugs. Use at your own risk.