This optimisation applies when the repository contains a large number of literals with language tags, and it is necessary to execute queries that filter based on language, e.g. using the following SPARQL query construct:
{{FILTER ( lang(?name) = "ES" )}}
In this situation, the {{in-memory-literal-properties}} configuration parameter can be set to {{true}}, causing the data values with language tags to be cached.
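For example, a query of the following form relies on this kind of language-based filtering (the {{rdfs:label}} predicate is purely illustrative):
{noformat}
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?s ?name
WHERE {
  ?s rdfs:label ?name .
  FILTER ( lang(?name) = "ES" )
}
{noformat}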
h2. Enumerating {{owl:sameAs}}
The presence of many {{owl:sameAs}} statements -- such as when using several LOD datasets and their link sets -- causes an explosion in the number of inferred statements. For a simple example, if A is a city in country X, and B and C are alternative names for A, and Y and Z are alternative names for X, then the inference engine should also infer: B in X, C in X, B in Y, C in Y, B in Z and C in Z.
As described in the GraphDB-SE user guide, GraphDB-SE avoids the explosion of inferred statements caused by having many {{owl:sameAs}} statements by grouping equivalent URIs into a single master node, which is used for inference and statement retrieval. This is in effect a kind of backward chaining, which allows all the sound and complete statements to be computed at query time.
This optimisation can save a large amount of space for two reasons:
# A single node is used for all N URIs in an equivalence class, which avoids storing N {{owl:sameAs}} statements;
# If there are N equivalent URIs in one equivalence class, then the reasoning engine infers that all URIs in this equivalence class are equivalent to each other (and themselves), i.e. another N{^}2^ {{owl:sameAs}} statements can be avoided.
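As an illustration (the figure is arbitrary), an equivalence class of 1,000 URIs saves roughly:
{noformat}
N = 1,000 equivalent URIs
explicit owl:sameAs statements avoided:  N   =     1,000
inferred owl:sameAs statements avoided:  N^2 = 1,000,000
{noformat}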
During query answering, all members of each equivalence class appearing in a query are substituted, so that sound and complete query results are generated. However, even though the mechanism for storing equivalences is always used and cannot be switched off, it is possible to prevent the enumeration of equivalent URIs during query answering. When using a dataset with many {{owl:sameAs}} statements, turning off the enumeration can dramatically reduce the number of _duplicate_ query results; instead, a single URI from each equivalence class is chosen as the representative in the results.
To turn off the enumeration of equivalent URIs, a special pseudo-graph name is used:
{{FROM/FROM NAMED <http://www.ontotext.com/disable-SameAs>}}
Two versions of the same query are shown below, without and with this special graph name. The queries are executed against the [factforge|http://factforge.net/] combined dataset:
{noformat}PREFIX dbpedia: <http://dbpedia.org/resource/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT *
WHERE { ?c rdfs:subClassOf dbpedia:Airport . }
{noformat}Gives results:
{noformat}dbpedia:Air_strip
http://sw.cyc.com/concept/Mx4ruQS1AL_QQdeZXf-MIWWdng
umbel-sc:CommercialAirport
opencyc:Mx4ruQS1AL_QQdeZXf-MIWWdng
dbpedia:Jetport
dbpedia:Airstrips
dbpedia:Airport
dbpedia:Airporgt
fb:guid.9202a8c04000641f800000000004ae12
opencyc-en:CommercialAirport
{noformat}Whereas the same query with the pseudo-graph that prevents equivalence class enumeration:
{noformat}PREFIX dbpedia: <http://dbpedia.org/resource/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT *
FROM <http://www.ontotext.com/disable-SameAs>
WHERE { ?c rdfs:subClassOf dbpedia:Airport . }
{noformat}Gives results:
{noformat}dbpedia:Air_strip
opencyc-en:CommercialAirport
{noformat}
h1. Reasoning complexity
The complexity of the rule set has a large effect on loading performance and on the overall size of the repository after loading. The complexity of the standard rule sets increases as follows:
* none (lowest complexity, best performance)
* rdfs-optimized
* rdfs
* owl-horst-optimized
* owl-horst
* owl-max-optimized
* owl-max
* owl2-ql-optimized
* owl2-ql
* owl2-rl-optimized
* owl2-rl (highest complexity, worst performance)
It should be noted that all rules affect the loading speed, even if they never actually infer any new statements. This is because every newly added statement is pushed into every rule to see whether anything can be inferred, which often results in many joins being computed even when the rule never 'fires'.
h2. Custom rulesets
If better load performance is required and it is known that the dataset does not contain anything that will apply to certain rules, then these rules can be omitted from the rule set. To do this, copy the appropriate '.pie' file included in the distribution, remove the unused rules, and set the {{ruleset}} configuration parameter to the full pathname of this custom rule set.
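For example, if the trimmed rule set were saved as {{/opt/graphdb/custom-owl-horst.pie}} (an illustrative path), the configuration would contain:
{noformat}
ruleset = /opt/graphdb/custom-owl-horst.pie
{noformat}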
If custom rules are being used to specify semantics not covered by the included standard rulesets, then some care must be taken for the following reasons:
* Recursive rules can lead to an explosion in the number of inferred statements;
* Rules with unbound variables in the head cause new blank nodes to be created -- the inferred statements can never be retracted and can cause other rules to fire.
h2. Long transitive chains
SwiftOWLIM version 2.9 contained a special optimisation that prevented the materialisation of inferred statements resulting from transitive chains; instead, these inferences were computed during query answering. However, such an optimisation is NOT available in GraphDB-SE due to the nature of its indexing structures. Therefore, GraphDB-SE attempts to materialise all inferred statements at load time. When a transitive chain is long, this can cause a very large number of inferences to be made. For example, for a chain of N {{rdfs:subClassOf}} relationships, GraphDB-SE infers (and materialises) a further (N{^}2^\-N)/2 statements. If the relationship is also symmetric, e.g. in a family ontology with a predicate such as {{relatedTo}}, then there will be N{^}2^\-N inferred statements.
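For example, for a chain of 1,000 such relationships (an arbitrary figure):
{noformat}
transitive only:           (1,000^2 - 1,000)/2 = 499,500 inferred statements
transitive and symmetric:   1,000^2 - 1,000    = 999,000 inferred statements
{noformat}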
Administrators should therefore take great care when managing datasets that have long chains of transitive relationships. If performance becomes a problem then it may be necessary to:
# Modify the schema, either by removing the symmetry of certain transitive relationships or by removing the transitive nature of certain properties altogether;
# Reduce the complexity of inference by choosing a less sophisticated rule set.
h1. Strategy
The life-cycle of a repository instance typically starts with the initial loading of datasets, followed by the processing of queries and updates. The loading of a large dataset can take a long time -- 12 hours for a billion statements with inference is not unusual. Therefore, it is often useful to use a different configuration during loading than during normal use. Furthermore, if a dataset is loaded frequently because it changes gradually over time, the loading configuration can be refined as the administrator becomes more familiar with how GraphDB-SE behaves with this dataset. Many properties of the dataset only become apparent after the initial load (such as the number of unique entities) and this information can be used to optimise the loading step the next time round or to improve the normal-use configuration.
h2. Dataset loading
A typical initialisation life-cycle is like this:
# Configure a repository for best loading performance with many parameters estimated;
# Load data;
# Examine dataset properties;
# Refine loading configuration;
# Reload data and measure improvement.
Unless the repository needs to answer queries during the initialisation phase, the repository can be configured with the minimum number of options and indices, with a large portion of the available heap space given over to the cache memory:
{noformat}
enablePredicateList = false (unless the dataset has a large number of predicates)
enable-context-index = false
in-memory-literal-properties = false
cache-memory = approximately 50% of total heap space (-Xmx value)
{noformat}
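For instance, if the JVM is started with {{-Xmx8g}} (an illustrative figure), roughly half of the heap would be given to the cache, e.g. (assuming the usual memory size suffixes):
{noformat}
cache-memory = 4g
{noformat}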
h2. Normal operation
The optional indices can be built at a later time when the repository is used for query answering. The details of all optional indices, caches and optimisations have been covered previously in this document. Some experimentation is required using typical query patterns from the user environment.
The size of the data structures used to index entities is directly related to the number of unique entities in the loaded dataset. These data structures are always kept in memory. In order to get an upper bound on the number of unique entities loaded and to find the actual amount of RAM used to index them, some knowledge of the contents of the storage folder is useful.
Briefly, the total amount of memory needed to index entities is equal to the sum of the sizes of the files {{entities.index}} and {{entities.hash}}. This value can be used to determine how much memory is used for entity indexing and therefore how to divide the remaining memory between the cache-memory, etc.
An upper bound on the number of unique entities is given by the size of {{entities.hash}} divided by 12 (memory is allocated in pages and therefore the last page will likely not be full).
The file {{entities.index}} is used to look up entries in the file {{entities.hash}} and its size is equal to the value of the {{entity-index-size}} parameter multiplied by 4. Therefore, the {{entity-index-size}} parameter has less to do with efficient use of memory and more to do with the performance of entity indexing and lookup: the larger this value, the fewer collisions occur in the {{entities.hash}} table. A reasonable size for this parameter is at least half the number of unique entities. However, the size of this data structure is never changed once the repository is created, so this knowledge can only be used to adjust this value for the next clean load of the dataset with a new (empty) repository.
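A worked example (all figures illustrative): for a repository created with {{entity-index-size}} = 5,000,000 whose {{entities.hash}} file has grown to 120,000,000 bytes:
{noformat}
upper bound on unique entities:   120,000,000 / 12 = 10,000,000 entities
size of entities.index:           5,000,000 x 4    = 20,000,000 bytes
memory needed to index entities:  120,000,000 + 20,000,000 bytes (~140 MB)
{noformat}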
The following parameters can be adjusted:
|| Parameter || Comment ||
| {{entity-index-size}} | set to a large enough value as described above |
| {{enablePredicateList}} | can speed up queries (and loading) |
| {{enable-context-index}} | |
| {{in-memory-literal-properties}} | |
| {{cache-memory}} | |
| {{tuple-index-memory}} | |
| {{predicate-memory}} | if predicate lists are enabled |
| {{fts-memory}} | if using Node Search |
Don't forget that:
{noformat}
cache-memory = tuple-index-memory + predicate-memory + fts-memory
{noformat}
If any one of these is unspecified, its value is calculated from the others. If two or more are unspecified, then the remaining cache memory is divided evenly between them.
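For example (figures illustrative), with {{cache-memory}} and {{predicate-memory}} given explicitly:
{noformat}
cache-memory     = 900m
predicate-memory = 300m
tuple-index-memory and fts-memory left unspecified:
  each receives (900m - 300m) / 2 = 300m
{noformat}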
Furthermore, the inference semantics can be adjusted by choosing a different rule set. However, this will require a reload of the whole repository, otherwise some inferences can remain when they should not.