The project new-cluster-test is used to deploy a GraphDB cluster through a configuration file. The scenario is that you have several physical (or virtual) machines, called boxes, and on each of them you want to run one or more GraphDB nodes (a "node" here means a GraphDB master or worker instance). Each node runs in tomcat, but several nodes can share one tomcat, depending on the configuration.
The tool can be used on one command-and-control machine, which may or may not participate in the cluster itself.
The Test Help Tool was specially created to run cluster tests, so it is tailored for such scenarios. It installs and runs test-help-tool.jar on each box. This tool has several roles, described below.
- Installation package
It consists of tomcat, GraphDB-Enterprise and their dependencies, packed in the jar. During deployment, they are unpacked in the appropriate places.
- Remote command executioner
It executes commands on the machine it is running on. These are used both during deployment and during testing (e.g. starting or stopping a specific tomcat instance). While this role could be handled by SSH (and in fact is, during deployment), SSH is harder to configure for testing on Windows machines, so the tests execute remote commands exclusively through the proxy.
- IP port redirection (proxy)
The cluster can be deployed in such a way that communication between nodes passes through the proxies. This allows test scripts to simulate various network problems, such as latency and disconnection.
NOTE: The tool's remote exec is a security hole, so it's not advisable to open it on an external network or on machines containing sensitive information!
All of the commands described below require the cluster configuration to run properly. Here, we'll describe the format and the meaning of the configuration file. The file must be called test.config and should be located in the user's home dir or in the tools directory on the command machine.
The file is read and interpreted one line at a time.
Lines with the hash symbol ('#') in the first column are treated as comments and are ignored. These are the only "officially" recognized comments.
Blank lines are skipped.
By default, the nodes in the cluster are configured to communicate through the proxy. This can be switched off with the following directive:
This directive should come before the masters/workers description, and it makes sense to be the first non-comment line in the file.
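The directive itself is not reproduced in this copy of the document. As an illustrative sketch only — the actual keyword may differ — a test.config could begin like this:

```
# disable proxying between nodes ("no-proxy" is a hypothetical keyword;
# check the tool's sources for the real directive name)
no-proxy
```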
The box declaration line looks like this:
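The example line is missing here; reconstructed from the parameter descriptions that follow (values are illustrative), a box declaration would look something like:

```
# box<N>  <server-or-ip>  <user>  <install-path>  <proxy-jmx-port>
box1  server-or-ip  onto  /space1/OWLIM  10050
```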
It should start with the word "box" followed by a number. Box names should be unique; if there are duplicates, the last declaration wins.
Next comes the box's name or IP address ("server-or-ip").
Then comes the user ("onto") on this machine who will run owlim. The command-and-control machine must have passwordless SSH access to this user on this machine.
Next is the absolute path where all owlim instances on this machine will be installed ("/space1/OWLIM").
The last parameter is the JMX port on which the proxy will listen. Don't use port numbers >= 20000 because they are used for redirects.
Next come master and worker declarations. They look like this:
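The example lines are missing here; reconstructed from the field descriptions that follow (values are illustrative), master and worker declarations would look something like:

```
# <master|worker><N>[|alias...]  <box>  <repo-name>  <tomcat-port>  <home-dir>
master1|m1  box1  master   10080  home.master1
worker1|w1  box1  worker1  10081  home.worker1
```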
Again, the line should start with the word "master" or "worker", followed by a number. The bar symbol ("|") adds optional aliases, which can be used in the tests.
Next comes the box on which the node should be installed ("box1").
After that is the repository name ("master"), followed by the tomcat port (10080) and the node's home directory on the box.
The main caveat here is that these lines also declare the tomcat instances. For example, the first line means that there will be a tomcat on box1 running on port 10080. The home.master1 directory is relative to the box's home directory. Here is how it will look:
- data (the sesame data dir)
- master (the repository name from the configuration line)
Note that for each box, only one tomcat can run on port 10080, so if another node specifies the same box and port, it must also specify the same home dir, but with a different repository name.
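For instance, two workers sharing one tomcat on box1 would have to use the same port and home dir but distinct repository names (illustrative values, following the declaration format described above):

```
worker1  box1  worker1  10081  home.workers
worker2  box1  worker2  10081  home.workers
```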
Master worker links are declared as follows:
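The example is missing in this copy of the document. The actual syntax is not known from the text above; as a purely hypothetical sketch, a link declaration might pair a master with its workers by name or alias:

```
# hypothetical syntax -- the real format may differ
link master1 worker1
link master1 worker2
```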
Additional Java options can be specified for all masters and for all workers as follows:
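The example is missing in this copy of the document. As a purely hypothetical sketch — the directive names are invented for illustration and may differ from the real ones — such lines might look like:

```
# hypothetical directive names -- check the tool's sources for the real ones
master-java-opts  -Xmx2g
worker-java-opts  -Xmx4g
```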
All commands are run through ant. The main commands are:
- to deploy the cluster
- to get the cluster status
- to start the cluster
- to stop the cluster
- to remove the deployment
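The ant target names are not reproduced in this copy of the document. A typical session would look something like the following — the target names here are illustrative guesses and may differ from the real ones:

```
ant deploy      # deploy the cluster
ant status      # get the cluster status
ant start       # start the cluster
ant stop        # stop the cluster
ant undeploy    # remove the deployment
```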
- Using this tool, the cluster can be deployed only on Linux machines. The command-and-control machine can run Windows, Linux or OS X.
- Unrecognized configuration lines are ignored. Recognized but invalid configuration lines may or may not be reported.
- Other configuration errors may or may not be detected, reported or handled.
- The proxy tool creates a log file named /tmp/test-help-tool.log. It is not automatically deleted, so it can grow quite large.
This tool was created to improve our test installations. It has many hard-coded assumptions, an awkward configuration file syntax, very poor error handling and reporting, and probably quite a few bugs. Use at your own risk.