Introduction
The Client Failover Utility provides failover support for scenarios in which the primary master node of the GraphDB Replication Cluster cannot be reached, for example because of connectivity issues. In such a case, Java exceptions are thrown, the failed master is flagged, and the Client Failover Utility retries the request against the next available master node.
At regular intervals (currently 15 seconds), the repository instance checks each of the failed master nodes and, when it finds a reachable and active master, returns it to the set of available masters for subsequent use.
Implementation
The Client Utility implements OpenRDF Sesame's org.openrdf.repository.Repository interface and is instantiated from a configuration file that describes the available Replication Cluster master nodes.
Procedure
1. Obtain an instance
The factory method to obtain an instance using a configuration file is:
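A minimal sketch is shown below; the factory class and method names are placeholders (only the existence of a configuration-file-based factory method is stated here), and the configuration file name is illustrative:

    import org.openrdf.repository.Repository;

    // Hypothetical factory call; the actual factory class and method names may differ.
    // The argument is the name of the configuration file describing the master nodes.
    Repository repository = ClientHTTPRepositoryFactory.createRepository("cluster.config");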
2. Initialize the instance
The instance is initialized manually by invoking its initialize() method. Alternatively, the list of masters can be provided as arguments to the factory methods:
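For example (a sketch; the master URLs are illustrative and the exact signature of the URL-based factory overload is an assumption):

    // Initialize the instance obtained from a configuration file.
    repository.initialize();

    // Alternatively, pass the master node URLs directly to a factory overload
    // (hypothetical signature).
    Repository repository2 = ClientHTTPRepositoryFactory.createRepository(
            "http://master1:8080/openrdf-sesame/repositories/worker",
            "http://master2:8080/openrdf-sesame/repositories/worker");
    repository2.initialize();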
Support for custom HTTP headers
Each instance of ClientHTTPRepository implements an additional interface that provides the possibility for adding per-request additional HTTP headers:
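The sketch below illustrates the intended usage pattern; the method names are placeholders, since the actual interface is not listed in this section:

    ClientHTTPRepository repository = new ClientHTTPRepository("cluster.config");
    repository.initialize();

    // Hypothetical methods of the header interface implemented by ClientHTTPRepository.
    // The header is attached to the requests made by the current thread.
    repository.addCustomHTTPHeader("X-Request-Id", "42");

    // ... execute queries or updates ...

    // Remove the thread-local header map once the headers are no longer needed.
    repository.clearCustomHTTPHeaders();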
 | The headers are stored in a map wrapped in a ThreadLocal. When they are no longer needed, the map should be removed. |
Control of the query handling and retry policy behavior
- Since revision 7299, two additional flags control the query handling and retry policy behavior when processing HTTP 503 responses and server-side HTTP 4xx responses.
Both flags default to "true" and can be altered either through Java runtime properties ('-D') or by using the following method:
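The sketch below uses placeholder flag and method names (the real names are not listed in this section):

    // Hypothetical setters on ClientHTTPRepository; the real method names may differ.
    repository.setRetryOn503(false);  // do not retry another master on HTTP 503
    repository.setFailOn4xx(true);    // treat server-side HTTP 4xx as an error

    // Alternatively, the flags could be set as JVM properties, e.g.:
    //   -Dclient.failover.retryOn503=false -Dclient.failover.failOn4xx=true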
- A new method has been added to ClientHTTPRepository. It adds or changes the parameters of all underlying Apache HttpClient instances through which the repository communicates with each of the registered master nodes:
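A sketch with a placeholder method name and illustrative HttpClient parameter keys:

    // Hypothetical method; the real name and the supported parameter keys depend on
    // the Apache HttpClient version bundled with the utility.
    repository.setHttpClientParameter("http.connection.timeout", 5000);  // connect timeout, ms
    repository.setHttpClientParameter("http.socket.timeout", 30000);     // read timeout, ms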
- The user may acquire a connection from the repository instance by invoking the Repository getConnection() method; the connection implements OpenRDF Sesame's org.openrdf.repository.RepositoryConnection interface. All operations are available through that connection.
- The connection instance also implements another interface, which can be used to alter the behavior of commit() so that it waits for any delayed updates (indicated by the HTTP 202 result code) to finish before returning:
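A sketch using a placeholder interface and method name for the commit() behavior switch:

    import org.openrdf.repository.RepositoryConnection;

    RepositoryConnection connection = repository.getConnection();

    // Hypothetical interface implemented by the returned connection.
    if (connection instanceof DelayedUpdateConnection) {
        // Make commit() block until updates acknowledged with HTTP 202 have completed.
        ((DelayedUpdateConnection) connection).setWaitForDelayedUpdates(true);
    }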
Configuration
The configuration file is a text file that describes the available master nodes by their URL locations and, optionally, by user and password credentials, if a security policy is applied to the SPARQL endpoints.
The URL is introduced by the case-insensitive keyword 'uri' or 'url' followed by the '=' sign; the rest of the line is trimmed of whitespace and used as the URL. A basic scheme check verifies that the value is a valid URL.
 | The user and password should follow the uri description line. They are denoted by 'user' for the username and 'pass' for the password. The symbol '#' at the beginning of a line and the double slash '//' are interpreted as comments and are not processed. |
The following is an example of a client config:
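A minimal example that follows the format described above (host names and credentials are illustrative):

    # Replication Cluster masters used by the Client Failover Utility
    url = http://master1.example.com:8080/openrdf-sesame/repositories/worker
    user = admin
    pass = secret

    // second master, no credentials required
    uri = http://master2.example.com:8080/openrdf-sesame/repositories/worker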
Examples
Examples of how to instantiate the Client Utility and execute various operations, such as evaluating queries, executing SPARQL updates, and adding or removing data.
Procedure
1. Instantiate the client utility.
The Client Utility Java class is com.ontotext.repository.http.ClientHTTPRepository.
2. Initialize the client utility before acquiring any connection.
3. Invoke the shutDown() method to clean up all local resources when the client utility is no longer needed.
Query evaluation
The following code snippet shows how to evaluate a SPARQL query. The constructor takes a single argument, which is the name of the configuration file.
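A sketch, assuming the constructor described above; the configuration file name and the query are illustrative:

    import org.openrdf.query.BindingSet;
    import org.openrdf.query.QueryLanguage;
    import org.openrdf.query.TupleQueryResult;
    import org.openrdf.repository.RepositoryConnection;

    import com.ontotext.repository.http.ClientHTTPRepository;

    public class QueryExample {
        public static void main(String[] args) throws Exception {
            // The single constructor argument is the name of the configuration file.
            ClientHTTPRepository repository = new ClientHTTPRepository("cluster.config");
            repository.initialize();

            RepositoryConnection connection = repository.getConnection();
            try {
                TupleQueryResult result = connection
                        .prepareTupleQuery(QueryLanguage.SPARQL,
                                "SELECT * { ?s ?p ?o } LIMIT 10")
                        .evaluate();
                try {
                    while (result.hasNext()) {
                        BindingSet bindings = result.next();
                        System.out.println(bindings.getValue("s"));
                    }
                } finally {
                    result.close();
                }
            } finally {
                connection.close();
            }

            repository.shutDown();
        }
    }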
In addition, one may set the folder where the configuration file is located using the Repository's setDataDir() method.
The same location is used to temporarily store any submitted updates that are not yet confirmed as completed.
Executing a SPARQL update
The initialization code is the same and the way to execute the update follows the Sesame API.
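A sketch (the update string is illustrative; the repository is instantiated and initialized as shown above):

    RepositoryConnection connection = repository.getConnection();
    try {
        connection.prepareUpdate(QueryLanguage.SPARQL,
                "INSERT DATA { <urn:subject> <urn:predicate> \"value\" }").execute();
    } finally {
        connection.close();
    }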
Grouping updates into a single transaction
Here again, the initialization is the same; the key point is to invoke begin() and commit() around the sequence of update operations.
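A sketch, again assuming an initialized repository as above; the two updates are illustrative:

    RepositoryConnection connection = repository.getConnection();
    try {
        connection.begin();
        connection.prepareUpdate(QueryLanguage.SPARQL,
                "INSERT DATA { <urn:s1> <urn:p> \"one\" }").execute();
        connection.prepareUpdate(QueryLanguage.SPARQL,
                "INSERT DATA { <urn:s2> <urn:p> \"two\" }").execute();
        // Both updates are sent as a single transaction.
        connection.commit();
    } catch (Exception e) {
        connection.rollback();
        throw e;
    } finally {
        connection.close();
    }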
Additional notes and dependencies
Logging is based on slf4j and loggers ...