GraphDB-Enterprise Administration

The following sections cover typical administrative tasks for managing a GraphDB-Enterprise cluster instance.

{toc}

After the instantiation of a master node, worker nodes can be added using a JMX client application, e.g. jconsole. From the MBeans tab, select the bean associated with the master node to be modified. Each bean will be named: ReplicationCluster/ClusterInfo/<repository_id>.

Worker nodes can be added using the addClusterNode operation, with the following parameters:
* repository URL;
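Instead of using jconsole interactively, the same operation can be invoked programmatically through the standard JMX API. The sketch below assumes a hypothetical JMX endpoint, repository id, and worker URL, and assumes an ObjectName of the form {{ReplicationCluster:name=ClusterInfo/<repository_id>}}, matching how jconsole displays the bean; verify the exact name in your deployment.

```java
import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class AddWorker {

    // Build the ObjectName of the master's cluster bean; the
    // "name=ClusterInfo/<id>" form is an assumption based on the
    // bean name shown in jconsole.
    static ObjectName masterBean(String repositoryId)
            throws MalformedObjectNameException {
        return new ObjectName("ReplicationCluster:name=ClusterInfo/" + repositoryId);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint of the master node -- adjust for your deployment.
        JMXServiceURL jmx = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://master-host:8089/jmxrmi");
        // Hypothetical worker repository URL.
        String workerUrl = "http://worker-host:8080/graphdb/repositories/worker1";

        try (JMXConnector connector = JMXConnectorFactory.connect(jmx)) {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            // Invoke the addClusterNode operation with the worker's repository URL.
            server.invoke(masterBean("my-repo"), "addClusterNode",
                    new Object[] { workerUrl },
                    new String[] { String.class.getName() });
            System.out.println("Added worker: " + workerUrl);
        } catch (Exception e) {
            System.out.println("JMX connection failed: " + e.getMessage());
        }
    }
}
```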

If a master node fails completely, clients of the repository will start receiving errors when querying or updating it. To avoid this, use at least one additional master together with the Client Failover API.

The Status attribute of the master node indicates the cluster health according to the following table:

|| Status || Description ||
| 0 | Available \\
Indicates that all worker nodes are synchronised and no problems have been detected. |
| 1 | Needs attention \\
A problem has prevented the cluster from synchronising the attached worker nodes. When this happens, no updates can be accepted (IsWritable becomes false) until all attached worker nodes are synchronised. If the AutoReplication attribute is true, then this happens automatically and fairly quickly. Otherwise, manual replication must be initiated. When all workers are in synch, the Status attribute returns to 0. |
| 2 | Not available \\
Indicates that no workers are available for processing requests. |
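For monitoring scripts, the numeric Status values can be mapped to the meanings above. The helper below is a minimal sketch; a real poller would read the value with {{MBeanServerConnection.getAttribute}} on the master's cluster bean, as in the worker-adding example.

```java
public class ClusterStatus {

    // Translate the master's Status attribute value into its documented meaning.
    static String describe(int status) {
        switch (status) {
            case 0: return "Available: all workers synchronised, no problems detected";
            case 1: return "Needs attention: workers out of sync, updates rejected until resynchronised";
            case 2: return "Not available: no workers available to process requests";
            default: return "Unknown status: " + status;
        }
    }

    public static void main(String[] args) {
        // A poller would obtain each value via getAttribute(bean, "Status").
        for (int s = 0; s <= 2; s++) {
            System.out.println(s + " -> " + describe(s));
        }
    }
}
```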
However, in situations where the network link between two data-centres is poor (slow, unreliable, high transience, etc.), a better approach for keeping the two data-centres (i.e. two GraphDB clusters) in synch is to use 'remote replication'. Using this feature, a master node of one cluster can be added as a pseudo-worker node to the other cluster, making a hierarchy of clusters. Master nodes for remote clusters added in this way do not take part in query answering, but do receive all updates. Also, many of the time-out and synchronisation parameters for the remote cluster are relaxed in order to cope with a more troublesome network layer.

When a remote master is added to another master, it has its own set of worker nodes. In such a configuration, each update handled by the remote master is slower, because it needs to update its own worker nodes. Before adding the remote master, set the RemoteMaster attribute to true on this node. This attribute value indicates that the remote master will receive only updates, and no read requests, from the controlling master. The updates are queued and sent asynchronously from the rest of the worker nodes, so that no delay in the operation of the controlling master occurs.

When synchronising a remote master, incremental replication is made more likely by increasing the tolerated distance between its current and expected state. Deep replication is triggered when the remote master cannot be brought up to date from the sequence of updates stored in the transaction log. If a deep replication is necessary, a regular worker node is selected to send its contents, and during the replication the controlling master will not be in a writable state. However, when the remote master is only a few updates behind, an incremental replication occurs and the controlling master remains in a writable state, able to accept and process updates.
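Setting the RemoteMaster attribute is an ordinary JMX attribute write on the bean of the master that is about to be attached as a remote master. A sketch under the same assumptions as before (host, port, repository id, and the ObjectName form are placeholders):

```java
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MarkRemoteMaster {

    // The attribute write that must happen before the node is added
    // to the controlling master as a remote (pseudo-worker) master.
    static Attribute remoteMasterFlag() {
        return new Attribute("RemoteMaster", Boolean.TRUE);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint of the second data-centre's master.
        JMXServiceURL jmx = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://dc2-master:8089/jmxrmi");
        ObjectName bean = new ObjectName("ReplicationCluster:name=ClusterInfo/my-repo");

        try (JMXConnector connector = JMXConnectorFactory.connect(jmx)) {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            server.setAttribute(bean, remoteMasterFlag());
            System.out.println("RemoteMaster set to true on " + bean);
        } catch (Exception e) {
            System.out.println("JMX connection failed: " + e.getMessage());
        }
    }
}
```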