RDF Priming is a technique that selects a subset of the available statements for use as the input to query answering. It is based upon the concept of 'spreading activation' as developed in cognitive science. RDF Priming is a scalable and customisable implementation of this connectionist method on top of RDF graphs, which allows large datasets to be "primed" with respect to concepts relevant to the context and to the query. It is controlled using SPARQL ASK queries. This section provides an overview of the mechanism and describes the SPARQL queries used to set up and manage RDF Priming.
RDF Priming Configuration
To enable RDF Priming over the repository, the repository-type configuration parameter should be set to weighted-file-repository.
The current implementation of RDF Priming does not persist activation values, which means that they are only available at runtime and are lost when the repository is shut down. However, they can be exported and imported using the special query directives shown below. Another side effect is that the activation values are global, because they are stored within the shared Entity pool.
The initialisation and management of the RDF Priming module is achieved by performing SPARQL ASK queries.
Controlling RDF Priming
RDF Priming is controlled using SPARQL ASK queries, which allow all the parameters and default values to be set. These queries use special system predicates, which are described below:
Used to enable or disable the RDF Priming module. The Object of the statement pattern should be a Literal whose value is either "true" or "false".
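A minimal sketch of such a query is given below; the predicate name (#switch) is an assumed, illustrative URI and should be replaced by the actual RDF Priming system predicate for this operation:
# Illustrative predicate name; use "false" to disable the module
ASK { _:b <http://www.ontotext.com/owlim/RDFPriming#switch> "true" . }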
Used to alter all the activation values for the nodes in the RDF graph by multiplying them by a factor, specified as a Literal in the Object position of the statement pattern. The following example resets all the activation values to zero by multiplying them by "0.0".
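A sketch of that reset query, again with an assumed predicate name shown purely for illustration:
# Illustrative predicate name; multiplying by "0.0" clears all activations
ASK { _:b <http://www.ontotext.com/owlim/RDFPriming#adjustActivation> "0.0" . }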
Used to trigger an activation spreading cycle that starts from the nodes scheduled for activation in this round. No special values are required for the Subject or Object of the statement pattern – blank nodes suffice.
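A spreading cycle could therefore be triggered with a query of the following form (the predicate name is again an assumed, illustrative URI):
# Illustrative predicate name; the blank nodes are ignored
ASK { _:s <http://www.ontotext.com/owlim/RDFPriming#spreadActivation> _:o . }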
Used to set a non-default weight factor for statements with a specific predicate. The Subject of the statement pattern is the predicate to which the new weight should be assigned; the Object is the new weight value as a Literal. The example query sets a weight factor of 0.5 for all rdfs:subClassOf statements.
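A sketch of that query, with an assumed predicate name shown for illustration:
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
# Illustrative predicate name; rdfs:subClassOf edges will carry a weight of 0.5
ASK { rdfs:subClassOf <http://www.ontotext.com/owlim/RDFPriming#assignWeight> "0.5" . }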
Used to schedule the nodes specified as the Subject or Object of the statement pattern for activation. Scheduling for activation can also be performed by evaluating an ASK query with variables in the body, in which case the nodes bound to the variables used in the query are scheduled for activation. The behaviour of such an ASK query is altered so that all solutions are exhausted before the query result is returned. This can take a long time, since LIMIT and OFFSET are not available in this case. The first example activates two nodes, gossip:hasTrack and prel:hasChild; the second activates the many nodes identifying people (and their names) that have an album called "American Life".
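A sketch of the first example follows; the priming predicate name and the gossip:/prel: namespace bindings are placeholders, since the original namespace URIs are not reproduced here:
# Placeholder namespaces and illustrative predicate name
PREFIX gossip: <http://example.org/gossip#>
PREFIX prel:   <http://example.org/prel#>
ASK { gossip:hasTrack <http://www.ontotext.com/owlim/RDFPriming#activate> prel:hasChild . }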
The following URIs are used in conjunction with the <http://www.ontotext.com/owlim/RDFPriming#decayFactor> predicate to change the parameters of the RDF Priming module. In general, the name of the parameter is the Subject of the statement pattern and the new value is passed as its Object.
During activation spreading, activations accumulate in nodes and can grow indefinitely. The activationThreshold parameter trims these values to a given threshold: it is applied on every iteration of the process and guarantees that no activation larger than the parameter value will be encountered. The default value is 1.0, which means that any value greater than 1.0 is set back to 1.0 on each iteration.
Used during spreading activation to control how much of a node's activation is transferred to the nodes that it affects. The following example query sets the decayFactor to "0.55".
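A sketch of that query; the control predicate (#setParam) is a hypothetical name used only to illustrate the parameter-setting pattern described above:
# Hypothetical control predicate; the Subject names the parameter and the Object carries the new value
ASK { <http://www.ontotext.com/owlim/RDFPriming#decayFactor> <http://www.ontotext.com/owlim/RDFPriming#setParam> "0.55" . }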
Sets the default activation value for all nodes in the repository. If this parameter has not been set, the default activation for all repository nodes is 0. This does not affect the activation origin nodes, whose activation values are set using http://www.ontotext.com/owlim/RDFPriming#initialActivation.
Edges in the RDF graph can be given weights that are multiplied by the source node activation in order to compute the activation that is spread across the edge to the destination node (see assignWeight). If the predicate of the edge is not given any specific weight (via assignWeight) then the edge weight is assumed to be 1/3 (one third). This default weight can be changed by using the defaultWeight parameter. Any floating point value in the range [0,1] can be used.
Used to export the activation values for a set of nodes. The values are stored in a file identified by the URL given as the Object of the statement pattern. The file format is one line per node, consisting of the node's URI followed by a tab character and its floating-point activation value.
Used to import activation values for a set of nodes. The values are loaded from a file identified by the URL given as the Object of the statement pattern, using the same format as for the export operation described above.
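Sketches of the export and import operations described above; the predicate names and the file URL are placeholders, and each ASK is evaluated as a separate query:
# Illustrative predicate names and file URL
ASK { _:b <http://www.ontotext.com/owlim/RDFPriming#exportActivations> <file:/tmp/activations.txt> . }
ASK { _:b <http://www.ontotext.com/owlim/RDFPriming#importActivations> <file:/tmp/activations.txt> . }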
Sets the initial activation value for each of the nodes from which the activation process starts. The nodes that are scheduled for activation will receive that amount at the beginning of the spreading activation process.
The following example uses data from DBpedia (http://dbpedia.org/About) imported into OWLIM-SE with the RDF Priming mode enabled. The management queries are evaluated through the Sesame console application for convenience. The initial step is to evaluate a demo query that retrieves all the instances of the dbpedia:V8 concept:
SELECT *
WHERE {?x <http://dbpedia.org/property/class> <http://dbpedia.org/resource/V8>. }
As can be seen, the query returns many engines from different manufacturers. The RDF Priming module can be used to reduce the number of results returned by this query by targeting the query to specific parts of the global RDF graph, i.e. the parts of the graph that have been activated.
The following steps show how to set up and configure the RDF Priming module so that the example query returns a smaller set of more specific results. It is assumed that a SPARQL endpoint is available, connected to a running repository instance.
Enable the RDF Priming module:
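(The predicate name below is the same illustrative name assumed earlier; substitute the actual RDF Priming switch predicate.)
ASK { _:b <http://www.ontotext.com/owlim/RDFPriming#switch> "true" . }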
Adjust the weight factor for a specific predicate so that it activates the relevant subset of the RDF graph, in this case the rdfs:subClassOf predicate:
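(Both the weight value and the predicate name below are illustrative.)
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
# Give rdfs:subClassOf edges a high weight so activation spreads along the class hierarchy
ASK { rdfs:subClassOf <http://www.ontotext.com/owlim/RDFPriming#assignWeight> "1.0" . }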
The next step alters the weight factor of the rdf:type predicate so that it does not propagate activation from the activated instances to their classes. This is a useful technique when there are many instances and a very large classification taxonomy that should not be broadly activated (as is the case with the DBpedia dataset).
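A weight of zero stops activation from spreading across rdf:type edges; a sketch of the query, with the same illustrative predicate name:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
ASK { rdf:type <http://www.ontotext.com/owlim/RDFPriming#assignWeight> "0.0" . }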
If the example query is executed at this stage, it returns no results, because the RDF graph has no activated nodes at all. Therefore the next step is to activate two particular nodes: the Ford Motor Company (dbpedia3:Ford_Motor_Company) and one of the cars it built (dbpedia3:1955_Ford), which came out of the factory with a very nice V8 engine:
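(A sketch; the dbpedia3 prefix is assumed to map to http://dbpedia.org/resource/ and the predicate names are illustrative. After scheduling the nodes and triggering a spreading cycle, re-evaluate the demo query.)
PREFIX dbpedia3: <http://dbpedia.org/resource/>
# Schedule the two nodes for activation
ASK { dbpedia3:Ford_Motor_Company <http://www.ontotext.com/owlim/RDFPriming#activate> dbpedia3:1955_Ford . }
# Trigger a spreading activation cycle, then re-run the SELECT query shown earlier
ASK { _:s <http://www.ontotext.com/owlim/RDFPriming#spreadActivation> _:o . }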
As can be seen, the result set is smaller and most of the engines retrieved are made by Ford. However, there is an engine made by Jaguar which is most probably there because Ford owned Jaguar for some time in the past, so both manufacturers are somehow related to each other. This might also be the case for the other non-Ford engines returned, since BMW also owned Jaguar for some time. Of course, these remarks are a free interpretation of the results.
Finally, disable the RDF Priming module:
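(Again with the illustrative switch predicate.)
ASK { _:b <http://www.ontotext.com/owlim/RDFPriming#switch> "false" . }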
Nested Repositories
Nested repositories are a technique for sharing RDF data between multiple OWLIM-SE repositories. The mechanism is most useful when several logically independent repositories need to make use of a large (reference) dataset, e.g. a combination of one or more LOD datasets such as Geonames, DBpedia or MusicBrainz, but where each repository adds its own specific data. It allows the data in the common repository to be logically included, or 'nested', within other repositories that extend it. RDF data in the common repository is combined with the data in each child repository for inference purposes. Changes in the common repository are reflected across all child repositories, and inferences are maintained so that they remain logically consistent.
Results for queries against a child repository are computed from the contents of the child repository as well as the nested repository. The following diagram illustrates the nested repositories concept:
When two child repositories extend the same nested repository, they remain logically separate: changes made to one child repository do not affect its siblings, whereas changes made to the common nested repository affect all of its child repositories.
Definition: A repository that is nested into another repository (possibly into more than one other repository) is called a parent repository.
Definition: A repository that nests a parent repository is called a child repository.
Inference, indexing and queries
A child repository ignores any value given for its ruleset parameter and automatically uses the same ruleset as its parent repository. Child repositories compute inferences based on the union of the explicit data stored in the child and parent repositories. Changes to either the parent or the child cause the set of inferred statements in the child to be updated. However, the child repository must be initialised (running) when updates to the parent repository take place; if this is not the case, the child can become logically inconsistent.
When a parent repository is updated, it in turn updates every connected child repository with a set of statement insert/delete operations before its transaction is committed. When a child repository is updated, any new resources are recorded in the parent's dictionary so that the same resource in sibling child repositories is indexed using the same internal identifier.
A current limitation of the implementation is that no updates using the owl:sameAs predicate are permitted.
Queries executed on a child repository should perform almost as well as queries executed against a repository containing all the data (from both parent and child repositories).
Configuration
Both parent and child repositories must be deployed using Tomcat, and they must be deployed to the same instance on the same machine (i.e. the same JVM).
Repositories that are configured to use the nesting mechanism must be created using specific Sesame SAIL types:
owlim:ParentSail is used for parent (shared) repositories
owlim:ChildSail is used for child repositories that extend parent repositories
Additionally, the following configuration parameters are also used:
owlim:id is used in the parent configuration to provide a nesting name
owlim:parent-id is used in child repository configurations to identify the parent repository
Once created, a child repository must not be reconfigured to use a different parent repository as this will lead to inconsistent data.
When setting up several OWLIM instances to run in the same Java Virtual Machine, i.e. the JVM used to host Tomcat, make sure that the configured memory settings take into account the other repositories, e.g. if setting up 3 OWLIM instances, configure them as though each had only one third of the total Java heap space available.
Initialisation and shutdown
The correct initialisation sequence is to start the parent repository followed by each of its children.
Provided that no further updates occur, the shutdown order is not significant. However, it is suggested that the children be shut down first, followed by the parent.