OWLIM is a high-performance semantic repository, implemented in Java and packaged as a Storage and Inference Layer (SAIL) for the Sesame RDF database. This section describes the various editions of OWLIM.
OWLIM is based on Ontotext's Triple Reasoning and Rule Entailment Engine (TRREE) – a native RDF rule-entailment engine. The supported semantics can be configured through the definition of rule-sets. The most expressive pre-defined rule-set combines unconstrained RDFS and OWL-Lite. Custom rule-sets allow tuning for optimal performance and expressivity. OWLIM supports RDFS (section 3.1.2), OWL DLP, OWL Horst, most of OWL Lite and OWL2 RL.
One of the main advantages of OWLIM-Lite is the in-memory reasoning implementation: the full content of the repository is loaded and maintained in main memory, which allows for efficient retrieval and query answering. Although the reasoning is handled in-memory, the OWLIM-Lite SAIL offers a relatively comprehensive persistence and backup strategy.
The limitations of OWLIM are related to its reasoning strategy. In general, the expressivity of the supported language cannot be extended in the Description Logic direction, because the semantics must be expressible in (Horn) rules. The total materialisation strategy has drawbacks when the explicitly asserted statements change frequently. For expressive semantics and certain ontologies, the number of implicit statements can grow quickly, with the expected degradation in performance. OWLIM-SE has a number of optimisations to reduce this problem, e.g. special handling of owl:sameAs. Removing explicit statements can adversely affect performance if the full closure needs to be recomputed; again, OWLIM-SE uses special techniques to avoid this situation. A further limitation of OWLIM-Lite is that the volume of data it can process is limited by the size of the computer's main memory. On currently available commodity hardware, OWLIM-Lite can handle millions of statements on desktop machines and above ten million on entry-level servers.
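The owl:sameAs optimisation mentioned above can be illustrated with a toy sketch in plain Java. This is not OWLIM's implementation; it merely shows the underlying idea, which is that instead of materialising every inferred triple for every member of an owl:sameAs equivalence class, resources can be collapsed to a single canonical representative (here via a simple union-find structure; all class and method names are illustrative).

```java
import java.util.*;

public class SameAsClasses {
    // Maps each resource to its parent in the union-find forest.
    private final Map<String, String> parent = new HashMap<>();

    // Find the canonical representative of a resource (with path compression).
    public String find(String x) {
        String p = parent.getOrDefault(x, x);
        if (p.equals(x)) return x;
        String root = find(p);
        parent.put(x, root);   // compress the path for faster later lookups
        return root;
    }

    // Record an owl:sameAs statement by merging the two equivalence classes.
    public void sameAs(String a, String b) {
        String ra = find(a), rb = find(b);
        if (!ra.equals(rb)) parent.put(rb, ra);
    }
}
```

With this representation, triples need only be stored once per equivalence class rather than once per member, which is why large owl:sameAs clusters (common in linked-data integration) do not blow up the materialised closure.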
OWLIM version 3.X is packaged as a Storage and Inference Layer (SAIL) for Sesame version 2.x and makes extensive use of the features and infrastructure of Sesame, especially the RDF model, RDF parsers and query engines.
Figure 5 - OWLIM Usage and Relationship to Sesame and ORDI
OWLIM implements the Sesame SAIL interface so that it can be integrated with the rest of the Sesame framework, e.g. the query engines and the web UI. A user application can be designed to use OWLIM directly through the Sesame SAIL API or via the higher-level functional interfaces. When an OWLIM repository is exposed using the Sesame HTTP Server, users can manage the repository through the Sesame Workbench Web application, or with other tools integrated with Sesame, e.g. ontology editors like Protégé and TopBraid Composer.
OWLIM is implemented on top of the TRREE engine. TRREE stands for 'Triple Reasoning and Rule Entailment Engine'. TRREE performs reasoning based on forward-chaining of entailment rules over RDF triple patterns with variables. Its reasoning strategy is total materialisation (see section 3.1.7), although various optimisations are used, as described in the following sections.
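The idea of total materialisation by forward-chaining can be illustrated with a toy example in plain Java. This is only a sketch, not OWLIM's engine: it applies two RDFS-style rules (transitivity of subClassOf, and type propagation along subClassOf) to a fixed point, so that every implicit statement becomes explicit and queries need no reasoning at evaluation time.

```java
import java.util.*;

public class ToyMaterialisation {
    // A triple is just three strings: subject, predicate, object.
    public record Triple(String s, String p, String o) {}

    // Repeatedly apply the two rules until no new triple is inferred.
    public static Set<Triple> materialise(Set<Triple> explicit) {
        Set<Triple> closure = new HashSet<>(explicit);
        boolean changed = true;
        while (changed) {
            changed = false;
            List<Triple> snapshot = new ArrayList<>(closure);
            for (Triple t1 : snapshot) {
                for (Triple t2 : snapshot) {
                    // rdfs11: (a subClassOf b), (b subClassOf c) => (a subClassOf c)
                    if (t1.p().equals("subClassOf") && t2.p().equals("subClassOf")
                            && t1.o().equals(t2.s())) {
                        changed |= closure.add(new Triple(t1.s(), "subClassOf", t2.o()));
                    }
                    // rdfs9: (x type a), (a subClassOf b) => (x type b)
                    if (t1.p().equals("type") && t2.p().equals("subClassOf")
                            && t1.o().equals(t2.s())) {
                        changed |= closure.add(new Triple(t1.s(), "type", t2.o()));
                    }
                }
            }
        }
        return closure;
    }
}
```

Starting from {Cat subClassOf Mammal, Mammal subClassOf Animal, tom type Cat}, the loop derives tom type Mammal, Cat subClassOf Animal and tom type Animal. The fixed-point loop also makes the drawback noted earlier concrete: deleting an explicit statement may require recomputing the whole closure, since inferred triples do not record which explicit triples produced them.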
The semantics used is based on R-entailment, with the following differences:
Further details of the rule language can be found in the corresponding user guides.
The edition of TRREE used in OWLIM-Lite is referred to as 'SwiftTRREE' and performs reasoning and query evaluation in-memory. The edition of TRREE used in OWLIM-SE is referred to as 'BigTRREE' and utilises data structures backed by the file-system. These data structures are organised to allow query optimisations that dramatically improve performance with large datasets, e.g. with one of the standard tests OWLIM-SE evaluates queries against 7 million statements three times faster than OWLIM-Lite, although it takes between two and three times more time to initially load the data.
The two OWLIM editions – OWLIM-Lite and OWLIM-SE – are identical in terms of usage and integration except for a few minor differences in some configuration parameters. The editions differ in which version of the TRREE engine they are based upon, but share the same inference and semantics (rule-compiler, etc.).
OWLIM-SE is suitable for massive volumes of data and heavy query loads. It is designed as an enterprise-grade semantic repository system. It features:
Table 2 - Comparison between OWLIM-Lite and OWLIM-SE
OWLIM offers several predefined semantics by way of standard rule-sets (files), but can also be configured to use custom rule-sets with semantics better tuned to the particular domain. The required semantics can be specified through the rule-set for each specific repository instance. Applications that do not need the complexity of the most expressive supported semantics can choose a less complex rule-set, which results in faster inference.
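Selecting a rule-set per repository instance is done in the repository configuration. The Turtle fragment below is a sketch based on the Sesame 2.x repository configuration vocabulary; the repository ID, label and the exact parameter names (in particular the owlim: namespace and the ruleset value) are assumptions here and should be checked against the bundled user guide.

```turtle
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rep:   <http://www.openrdf.org/config/repository#> .
@prefix sr:    <http://www.openrdf.org/config/repository/sail#> .
@prefix sail:  <http://www.openrdf.org/config/sail#> .
@prefix owlim: <http://www.ontotext.com/trree/owlim#> .

[] a rep:Repository ;
   rep:repositoryID "example" ;            # hypothetical repository ID
   rdfs:label "Example OWLIM repository" ;
   rep:repositoryImpl [
      rep:repositoryType "openrdf:SailRepository" ;
      sr:sailImpl [
         sail:sailType "owlim:Sail" ;
         owlim:ruleset "owl-horst"         # one of the pre-defined rule-sets
      ]
   ] .
```

Choosing a lighter rule-set here (e.g. plain RDFS instead of owl-horst) reduces the number of rules evaluated during materialisation and hence the loading time.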
The pre-defined rule-sets are layered such that each one extends the preceding one. The following list is ordered by increasing expressivity:
OWLIM has an internal rule compiler that can be used to configure the TRREE with a custom set of inference rules and axioms. The user may define a custom rule-set in a *.pie file (e.g. MySemantics.pie). The easiest way to do this is to start modifying one of the .pie files that were used to build the precompiled rule-sets – all pre-defined .pie files are included in the distribution. The syntax of the .pie files is easy to follow.
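To give a flavour of the rule language, the fragment below mirrors the general shape of a .pie file: a prefix section, optional axiomatic triples, and named rules whose premises (above the separator) entail the conclusion (below it). The exact grammar, keyword spelling and separators should be taken from one of the pre-defined .pie files in the distribution; this fragment is an illustrative sketch only.

```
Prefices
{
  rdf  : http://www.w3.org/1999/02/22-rdf-syntax-ns#
  rdfs : http://www.w3.org/2000/01/rdf-schema#
}

Axioms
{
  <rdf:type> <rdf:type> <rdf:Property>
}

Rules
{
  Id: subClassOf_transitivity
    a <rdfs:subClassOf> b
    b <rdfs:subClassOf> c
    -------------------------------
    a <rdfs:subClassOf> c
}
```

Lower-case identifiers in the rule bodies (a, b, c) act as variables that are matched against RDF triple patterns during forward-chaining.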
Regarding OWL compliance, OWLIM supports several OWL-like dialects: OWL Horst (owl-horst), OWL Max (owl-max), which covers most of OWL-Lite and RDFS, OWL2 QL (owl2-ql) and OWL2 RL (owl2-rl).
With the owl-max rule-set OWLIM supports the following semantics:
The differences between OWL Horst and the OWL dialects supported by OWLIM (owl-horst and owl-max) can be summarized as follows:
Even though the concrete rules pre-defined in OWLIM differ from those defined in OWL Horst, the complexity and decidability results reported for R-entailment are relevant for TRREE and OWLIM. More precisely, the rules in the owl-horst rule-set do not introduce new B-Nodes, which means that R-entailment with respect to them takes polynomial time. In KR terms, this means that owl-horst inference within OWLIM is tractable.
The correctness of the support for OWL semantics (for those primitives which are supported) is checked against the normative Positive- and Negative-entailment OWL test cases. These tests are provided in the OWLIM distribution and documented in the OWLIM user guides.