This section is intended for system administrators who are already familiar with GraphDB-SE and the Sesame openRDF infrastructure and who wish to configure their systems for optimal performance. Best performance is typically measured by the shortest load time and fastest query answering. Many factors affect the performance of a GraphDB-SE repository in many different ways. This section brings together all the factors that affect performance, however it is measured.
{toc}
h1. Memory configuration
Memory configuration is the single most important factor for optimising the performance of GraphDB-SE. In every respect, the more memory available, the better the performance. The only question is how to divide the available memory between the various GraphDB-SE data structures in order to achieve the best overall behaviour.
h2. Setting the maximum Java heap space
The maximum amount of heap space used by a Java virtual machine (JVM) is specified with the {{\-Xmx}} virtual machine parameter. The value should be no higher than about \~90% of the free memory in the target system, which leaves room for extra runtime overhead.
For example, if a system has 16GB total RAM and 1GB is used by the operating system, services, etc., then ideally the JVM that hosts the application using GraphDB-SE should have a maximum heap size of 15GB (16-1) and can be set using the JVM argument: {{\-Xmx15g}}.
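For example, a stand-alone application that embeds GraphDB-SE might then be launched as follows (the class name and classpath are illustrative):
{noformat}
java -Xmx15g -cp "lib/*" com.example.LoadDataset
{noformat}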
h2. Data structures
The heap space available is used by:
* the JVM, the application and GraphDB-SE workspace (byte code, stacks, etc.);
* data structures for storing entities, whose size is controlled by {{entity-index-size}};
* data structures for indexing statements, whose size is controlled by {{cache-memory}}.
In other words, the memory required for storing entities is determined by the number of entities in the dataset: 4 bytes for each of the slots allocated with {{entity-index-size}}, plus 12 bytes for each stored entity.
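As a rough worked example (the figures are assumed), for a dataset with 100 million unique entities and an {{entity-index-size}} of 50 million:
{noformat}
entity index:     50,000,000 slots    x  4 bytes  =  200MB
stored entities: 100,000,000 entities x 12 bytes  =  1.2GB
total memory for entities                         ~  1.4GB
{noformat}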
However, the memory required for the indices (cache types) depends on which indices are being used. The {{SPO}} and {{PSO}} indices are always used. Optional indices include {{predicateLists}}, the context indices {{PCSO}} / {{PSOC}}, and the FTS (full-text search) indices.
The memory allocated to these cache types can be calculated automatically by GraphDB-SE; however, some of it can be apportioned in a more fine-grained way. The relevant configuration parameters include:
{noformat}
cache-memory          the total memory for all cache types
tuple-index-memory    the memory for the statement indices (e.g. SPO, PSO)
predicate-memory      the memory for the predicate list indices
fts-memory            the memory for the full-text search indices
{noformat}
h2. Running in a servlet container
If the GraphDB-SE repository is being hosted by the Sesame HTTP servlet, then the maximum heap space applies to the servlet container (Tomcat). In this case, allow some more heap memory for the runtime overhead, especially if other servlets are running at the same time. Some configuration of the servlet container might also improve performance, e.g. increasing the permanent generation size, which is 64MB by default; quadrupling it with {{\-XX:MaxPermSize=256m}} might help. Further information can be found in the Tomcat documentation.
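For example, these options can be passed to Tomcat through the {{CATALINA_OPTS}} environment variable before starting the container:
{noformat}
export CATALINA_OPTS="-Xmx15g -XX:MaxPermSize=256m"
{noformat}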
h1. Delete operations
GraphDB-SE's inference policy is based on materialisation, where implicit statements are inferred from explicit statements as soon as they are inserted into the repository, according to the semantics specified by the {{ruleset}} parameter. This approach has the advantage that query answering is very fast, since no inference needs to be done at query time.
However, no justification information is stored for inferred statements, so deleting a statement normally requires a full re-computation of all inferred statements, which can take a very long time for large datasets.
GraphDB-SE uses a special technique for handling the deletion of explicit statements and their inferences, called *smooth delete*.
h2. Algorithm
The algorithm used to identify and remove those inferred statements that can no longer be derived once the given explicit statements are deleted is as follows:
# Use forward chaining to determine which statements can be inferred from the statements marked for deletion;
# Use backward chaining to see if these statements are still supported by other means;
# Delete explicit statements and the no longer supported inferred statements.
h2. Problem
The difficulty with the current algorithm is that almost all delete operations will follow inference paths that touch schema statements, which then lead to almost all other statements in the repository. This can lead to *smooth delete* taking a very long time indeed.
h2. Solution

Statements {{\[<Reviewer40476> rdf:type owl:Thing\]}}, etc., exist because of the statements {{\[<Reviewer40476> rdf:type <MyClass>\]}} and {{\[<MyClass> rdfs:subClassOf owl:Thing\]}}.
In large datasets there are typically millions of statements {{\[X rdf:type owl:Thing\]}}, and they are all visited by the algorithm.
The {{\[X rdf:type owl:Thing\]}} statements are not the only problematic ones considered for removal. Every class that has millions of instances leads to similar behaviour.
A single check of whether a statement is still supported requires around 30 query evaluations with {{owl-horst}}, hence the slow removal.
If {{\[owl:Thing rdf:type owl:Class\]}} is marked as an axiom (because it is derived from schema statements, which must be axioms), then the process stops when reaching this statement. Therefore, in the current version, the schema (the system statements) must be imported through the {{imports}} configuration parameter so that the schema statements are marked as axioms.
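For illustration, the relevant fragment of a repository configuration might look as follows (Turtle syntax, assuming the {{owlim}} namespace prefix; the file path is hypothetical):
{noformat}
# owlim: <http://www.ontotext.com/trree/owlim#>
owlim:ruleset "owl-horst-optimized" ;
owlim:imports "file:/opt/ontologies/my-schema.owl" ;
{noformat}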
h2. Schema transactions
As mentioned above, ontologies and schemas imported at initialisation time using the {{imports}} configuration parameter are flagged as read-only. However, it is sometimes necessary to change a schema, and this can be done inside a 'system transaction'.
The user instructs GraphDB-SE that a transaction is a system transaction by including a dummy statement with the special {{schemaTransaction}} predicate, i.e. a statement of the following form (the predicate is in the GraphDB system namespace):
{noformat}
[] <http://www.ontotext.com/owlim/system#schemaTransaction> ""
{noformat}
h2. Predicate lists
Predicate lists are two indices ({{SP}} and {{OP}}) that can improve performance in two separate situations:
* Loading/querying datasets that have a large number of predicates;
* Executing queries or retrieving statements that use a wildcard in the predicate position, for example using the statement pattern: {{dbpedia:Human ?predicate dbpedia:Land}}.
As a rough guideline, a dataset with more than about 1000 predicates will benefit from using these indices for both loading and query answering. Predicate list indices are not enabled by default, but can be switched on using the {{enablePredicateList}} configuration parameter.
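For example, using the same Turtle configuration style as above (the {{owlim}} prefix is an assumption):
{noformat}
owlim:enablePredicateList "true" ;
{noformat}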

h2. Caching literal language tags
This optimisation applies when the repository contains a large number of literals with language tags and it is necessary to execute queries that filter on language, e.g. using the following SPARQL query construct:
{{FILTER ( lang(?name) = "ES" )}}
In this situation, the {{in-memory-literal-properties}} configuration parameter can be set to {{true}}, causing the data values with language tags to be cached.
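For example, a complete query of this shape might look as follows (the {{rdfs:label}} statement pattern is illustrative):
{noformat}
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# select only values tagged with the chosen language
SELECT ?name
WHERE {
  ?city rdfs:label ?name .
  FILTER ( lang(?name) = "ES" )
}
{noformat}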
h2. owl:sameAs optimisation
The presence of many {{owl:sameAs}} statements -- such as when using several LOD datasets and their link sets -- causes an explosion in the number of inferred statements. For a simple example, if A is a city in country X, and B and C are alternative names for A, and Y and Z are alternative names for X, then the inference engine should also infer: B in X, C in X, B in Y, C in Y, B in Z, C in Z.
As described in the GraphDB-SE user guide, GraphDB-SE avoids this explosion of inferred statements by grouping the equivalent URIs into a single master node, which is used for inference and statement retrieval. This is in effect a kind of backward chaining, which allows all the sound and complete statements to be computed at query time.
This optimisation can save a large amount of space for two reasons:
# A single node is used for all N URIs in an equivalence class, which avoids storing N {{owl:sameAs}} statements;
# If there are N equivalent URIs in one equivalence class, then the reasoning engine would otherwise infer that all URIs in this equivalence class are equivalent to each other (and to themselves), i.e. another N{^}2^ {{owl:sameAs}} statements are avoided.
During query answering, all members of each equivalence class appearing in a query are substituted, so that sound and complete query results are generated. The mechanism for storing equivalences is standard and cannot be switched off; however, it is possible to prevent the enumeration of equivalent URIs during query answering. When using a dataset with many {{owl:sameAs}} statements, turning off the enumeration can dramatically reduce the number of _duplicated_ query results: a single URI from each equivalence class is chosen as the representative.
To turn off the enumeration of equivalent URIs, a special pseudo-graph name is used:
{{FROM / FROM NAMED <http://www.ontotext.com/disable-SameAs>}}
Two different versions of a query are shown below, with and without this special graph name, executed against the [factforge|http://factforge.net/] combined dataset. Without the pseudo-graph, the results enumerate every member of each equivalence class, e.g. both {{fb:guid.9202a8c04000641f800000000004ae12}} and {{opencyc-en:CommercialAirport}} appear as answers for the same class; with the pseudo-graph that prevents equivalence class enumeration, only the single representative {{opencyc-en:CommercialAirport}} is returned.
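A minimal sketch of the second form of such a query is given below; the statement pattern and the {{dbpedia:Luton_Airport}} resource are illustrative assumptions, not the original factforge example:
{noformat}
PREFIX dbpedia: <http://dbpedia.org/resource/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

# only one representative URI per equivalence class is returned
SELECT ?type
FROM <http://www.ontotext.com/disable-SameAs>
WHERE {
  dbpedia:Luton_Airport rdf:type ?type .
}
{noformat}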
h1. Reasoning complexity
The complexity of the chosen {{ruleset}} has a direct effect on both loading and query performance. In order of increasing complexity, the standard rulesets include:
* {{empty}} (no inference, best performance)
* {{rdfs}}
* {{owl-horst}}
* {{owl-max}}
* {{owl2-rl}} (highest complexity, worst performance)
It should be noted that all rules affect loading speed, even if they never actually infer any new statements. This is because as new statements are added, they are pushed into every rule to see if anything is inferred. This can often result in many joins being computed, even though the rule never 'fires'.
h2. Custom rulesets
If better load performance is required and it is known that the dataset does not contain anything that applies to certain rules, then those rules can be omitted from the ruleset. To do this, copy the appropriate '.pie' file included in the distribution, remove the unused rules, and set the {{ruleset}} configuration parameter to the full pathname of this custom ruleset.
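For orientation, a rule in the '.pie' format has roughly the following shape; this is a sketch only ({{myRule}} is a made-up identifier), and the exact grammar should be taken from the '.pie' files in the distribution:
{noformat}
Rules
{
  Id: myRule
    a  <rdfs:subClassOf>  b
    x  <rdf:type>         a
    ------------------------------
    x  <rdf:type>         b
}
{noformat}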
If custom rules are being used to specify semantics not covered by the included standard rulesets, then some care must be taken for the following reasons:
* Recursive rules can lead to an explosion in the number of inferred statements;
* Rules with unbound variables in the head cause new blank nodes to be created -- the inferred statements can never be retracted and can cause other rules to fire.
h2. Long transitive chains
SwiftOWLIM version 2.9 contained a special optimisation that prevented the materialisation of inferred statements resulting from transitive chains; instead, these inferences were computed during query answering. Such an optimisation is NOT available in GraphDB-SE due to the nature of its indexing structures, so GraphDB-SE attempts to materialise all inferred statements at load time. When a transitive chain is long, this can cause a very large number of inferences to be made. For example, for a chain of N {{rdfs:subClassOf}} relationships, GraphDB-SE infers (and materialises) a further (N{^}2^\-N)/2 statements; a chain of 1,000 such relationships therefore yields almost half a million inferred statements. If the relationship is also symmetric, e.g. in a family ontology with a predicate such as {{relatedTo}}, then there will be N{^}2^\-N inferred statements.
Administrators should therefore take great care when managing datasets that have long chains of transitive relationships. If performance becomes a problem then it may be necessary to:
# Modify the schema, either by removing the symmetry of certain transitive relationships or by removing the transitive nature of certain properties altogether;
# Reduce the complexity of inference by choosing a less sophisticated ruleset.
h1. Strategy
The life-cycle of a repository instance typically starts with the initial loading of datasets, followed by the processing of queries and updates. Loading a large dataset can take a long time -- 12 hours for a billion statements with inference is not unusual. Therefore, it is often useful to use a different configuration during loading than during normal use. Furthermore, if a dataset is loaded frequently because it changes gradually over time, the loading configuration can be evolved as the administrator becomes more familiar with the behaviour of GraphDB-SE on this dataset. Many properties of the dataset only become apparent after the initial load (such as the number of unique entities), and this information can be used to optimise the loading step the next time around, or to improve the configuration for normal use.
h2. Dataset loading
A typical initialisation life-cycle is as follows:
# Configure a repository for best loading performance with many parameters estimated;
# Load data;
# Examine dataset properties;
# Refine loading configuration;
# Reload data and measure improvement.
Unless the repository needs to answer queries during the initialisation phase, it can be configured with the minimum number of options and indices, with a large portion of the available heap space given over to the cache memory; a sketch of such a configuration is shown below.
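The parameter values in this sketch are illustrative only and should be tuned to the available RAM:
{noformat}
ruleset             = owl-horst-optimized
entity-index-size   = 200000000
cache-memory        = 12g
enablePredicateList = false
{noformat}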

The optional indices can be built at a later time when the repository is used for query answering. The details of all optional indices, caches and optimisations have been covered previously in this document. Some experimentation is required using typical query patterns from the user environment.
The size of the data structures used to index entities is directly related to the number of unique entities in the loaded dataset. These data structures are always kept in memory. In order to get an upper bound on the number of unique entities loaded, and to find the actual amount of RAM used to index them, some knowledge of the contents of the storage folder is useful.
Briefly, the total amount of memory needed to index entities is equal to the sum of the sizes of the files {{entities.index}} and {{entities.hash}}. This value can be used to determine how much memory is used for the entity index, and therefore how to divide the remaining memory between {{cache-memory}}, etc.
An upper bound on the number of unique entities is given by the size of {{entities.hash}} divided by 12 (memory is allocated in pages and therefore the last page will likely not be full).
The file {{entities.index}} is used to look up entries in the file {{entities.hash}}, and its size is equal to the value of the {{entity-index-size}} parameter multiplied by 4. Therefore, the {{entity-index-size}} parameter has less to do with efficient use of memory and more to do with the performance of entity indexing and look-up: the larger this value, the fewer collisions occur in the {{entities.hash}} table. A reasonable size for this parameter is at least half the number of unique entities. However, the size of this data structure is fixed once the repository is created, so this knowledge can only be used to adjust the value for the next clean load of the dataset into a new (empty) repository.
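As a worked example with assumed file sizes:
{noformat}
entities.hash  = 600MB  =>  at most 600,000,000 / 12 = 50 million unique entities
entities.index = 100MB  =>  entity-index-size was 100,000,000 / 4 = 25 million slots
memory needed to index entities = 600MB + 100MB = 700MB
{noformat}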
The following parameters can be adjusted:
|| Parameter || Comment ||
| {{cache-memory}} | the total memory available for all cache types |
| {{tuple-index-memory}} | the memory for the statement indices |
| {{predicate-memory}} | the memory for the predicate list indices |
| {{fts-memory}} | the memory for the full-text search indices |
If any one of these is omitted, its value is calculated automatically. If two or more are unspecified, then the remaining cache memory is divided evenly between them.
Furthermore, the inference semantics can be adjusted by choosing a different ruleset. However, changing the ruleset requires a reload of the whole repository; otherwise, some inferred statements may remain when they should not.