GraphDB is a high-performance semantic repository, implemented in Java and packaged as a Storage and Inference Layer (SAIL) for the Sesame RDF database. This section describes the various editions of GraphDB.
GraphDB is based on Ontotext's Triple Reasoning and Rule Entailment Engine (TRREE) – a native RDF rule-entailment engine. The supported semantics can be configured through the definition of rule-sets. The most expressive pre-defined rule-set combines unconstrained RDFS and OWL Lite. Custom rule-sets allow tuning for optimal performance and expressivity. GraphDB supports RDFS (section 3.1.2), OWL DLP, OWL Horst, most of OWL Lite and OWL2 RL.
One of the main advantages of GraphDB-Lite is the in-memory reasoning implementation: the full content of the repository is loaded and maintained in main memory, which allows for efficient retrieval and query answering. Although the reasoning is handled in-memory, the GraphDB-Lite SAIL offers a relatively comprehensive persistence and backup strategy.
The limitations of GraphDB are related to its reasoning strategy. In general, the expressivity of the language supported cannot be extended in the Description Logic direction, because the semantics must be able to be captured in (Horn) rules. The total materialisation strategy has drawbacks when changes to the explicitly asserted statements occur frequently. For expressive semantics and certain ontologies, the number of implicit statements can grow quickly, with the expected degradation in performance. GraphDB-SE has a number of optimisations to reduce this problem, e.g. special handling of owl:sameAs. Removing explicit statements can adversely affect performance if the full closure needs to be recomputed. Again, GraphDB-SE uses special techniques to avoid this situation. Another limitation of GraphDB-Lite is that the volume of data it can process is limited by the size of the computer's main memory. Considering currently available commodity hardware, GraphDB-Lite can handle millions of statements on desktop machines and above ten million on entry-level servers.
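To illustrate the kind of owl:sameAs optimisation mentioned above (this is a minimal sketch of the general technique, not GraphDB's actual implementation): rather than materialising every sameAs pair within an equivalence class, a store can collapse each class to a single canonical node using a union-find structure. The class and method names below are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of one common owl:sameAs optimisation (not GraphDB's actual code):
// collapse each equivalence class of URIs into a single canonical node with
// a union-find structure, instead of materialising all pairwise sameAs links.
public class SameAsIndex {
    private final Map<String, String> parent = new HashMap<>();

    // Find the class representative, compressing paths along the way.
    private String find(String x) {
        parent.putIfAbsent(x, x);
        String root = parent.get(x);
        if (!root.equals(x)) {
            root = find(root);
            parent.put(x, root); // path compression
        }
        return root;
    }

    // Assert <a> owl:sameAs <b>: merge the two equivalence classes.
    public void addSameAs(String a, String b) {
        parent.put(find(a), find(b));
    }

    // Canonical representative, used when storing and querying triples.
    public String canonical(String x) {
        return find(x);
    }

    public static void main(String[] args) {
        SameAsIndex idx = new SameAsIndex();
        idx.addSameAs("ex:Bob", "ex:Robert");
        idx.addSameAs("ex:Robert", "ex:Bobby");
        // all three URIs now map to one canonical node
        System.out.println(idx.canonical("ex:Bob").equals(idx.canonical("ex:Bobby")));
    }
}
```

With this scheme, triples are stored against canonical nodes only, so the closure does not grow quadratically in the size of a sameAs equivalence class.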
GraphDB version 6 is packaged as a Storage and Inference Layer (SAIL) for Sesame version 2.x and makes extensive use of the features and infrastructure of Sesame, especially the RDF model, RDF parsers and query engines.
GraphDB implements the Sesame SAIL interface so that it can be integrated with the rest of the Sesame framework, e.g. the query engines and the web UI. A user application can be designed to use GraphDB directly through the Sesame SAIL API or via the higher-level functional interfaces. When a GraphDB repository is exposed using the Sesame HTTP Server, users can manage the repository through the Sesame Workbench Web application, or with other tools integrated with Sesame, e.g. ontology editors like Protégé and TopBraid Composer.
GraphDB is implemented on top of the TRREE engine, where TRREE stands for 'Triple Reasoning and Rule Entailment Engine'. TRREE performs reasoning based on forward-chaining of entailment rules over RDF triple patterns with variables. TRREE's reasoning strategy is total materialisation (see section 3.1.7), although various optimisations are used, as described in the following sections.
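The forward-chaining, total-materialisation strategy can be sketched as follows (an illustrative toy, not TRREE's actual implementation): entailment rules are applied to the triple set repeatedly until no new triples can be derived (a fixpoint), after which queries are answered directly against the expanded set. Two RDFS rules are hard-coded here for brevity; a real engine compiles rules from rule-set definitions.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy total materialisation by forward chaining (not TRREE's actual code):
// apply entailment rules over triple patterns until a fixpoint is reached.
public class Materialise {
    record Triple(String s, String p, String o) {}

    public static Set<Triple> closure(Set<Triple> explicit) {
        Set<Triple> all = new HashSet<>(explicit);
        boolean changed = true;
        while (changed) { // repeat until no rule derives anything new
            changed = false;
            List<Triple> snapshot = new ArrayList<>(all);
            for (Triple t1 : snapshot) {
                for (Triple t2 : snapshot) {
                    // rule: ?x rdfs:subClassOf ?y, ?y rdfs:subClassOf ?z
                    //       => ?x rdfs:subClassOf ?z
                    if (t1.p().equals("rdfs:subClassOf") && t2.p().equals("rdfs:subClassOf")
                            && t1.o().equals(t2.s())) {
                        changed |= all.add(new Triple(t1.s(), "rdfs:subClassOf", t2.o()));
                    }
                    // rule: ?i rdf:type ?c, ?c rdfs:subClassOf ?d => ?i rdf:type ?d
                    if (t1.p().equals("rdf:type") && t2.p().equals("rdfs:subClassOf")
                            && t1.o().equals(t2.s())) {
                        changed |= all.add(new Triple(t1.s(), "rdf:type", t2.o()));
                    }
                }
            }
        }
        return all;
    }

    public static void main(String[] args) {
        Set<Triple> data = new HashSet<>(Set.of(
                new Triple("ex:Dog", "rdfs:subClassOf", "ex:Mammal"),
                new Triple("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
                new Triple("ex:rex", "rdf:type", "ex:Dog")));
        Set<Triple> all = closure(data);
        // the implicit statement "ex:rex rdf:type ex:Animal" is now stored
        System.out.println(all.contains(new Triple("ex:rex", "rdf:type", "ex:Animal")));
    }
}
```

The trade-off described earlier follows directly from this loop: queries are fast because implicit statements are pre-computed, but deleting an explicit statement may require recomputing the fixpoint.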
The semantics used is based on R-entailment with the following differences:
Further details of the rule language can be found in the corresponding user guides.
The edition of TRREE used in GraphDB-Lite is referred to as 'SwiftTRREE' and performs reasoning and query evaluation in-memory. The edition of TRREE used in GraphDB-SE is referred to as 'BigTRREE' and utilises data structures backed by the file system. These data structures are organised to allow query optimisations that dramatically improve performance with large datasets, e.g. in one of the standard tests, GraphDB-SE evaluates queries against 7 million statements three times faster than GraphDB-Lite, although it takes two to three times longer to load the data initially.
The two GraphDB editions – GraphDB-Lite and GraphDB-SE – are identical in terms of usage and integration except for a few minor differences in some configuration parameters. The editions differ in which version of the TRREE engine they are based upon, but share the same inference and semantics (rule compiler, etc.).
GraphDB-SE is suitable for massive volumes of data and heavy query loads. It is designed as an enterprise-grade semantic repository system. It features:
Table 2 - Comparison between GraphDB-Lite and GraphDB-SE
GraphDB offers several predefined semantics by way of standard rule-sets (files), but can also be configured to use custom rule-sets with semantics better tuned to the particular domain. The required semantics can be specified through the ruleset for each specific repository instance. Applications that do not need the most expressive supported semantics can choose a less complex rule-set, which results in faster inference.
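Selecting the rule-set for a repository instance is done at configuration time. The fragment below is an illustrative Sesame repository configuration in Turtle, following the general shape of GraphDB/OWLIM configuration templates; the repository ID, ruleset value and exact parameter names are placeholders to be checked against the configuration reference in the user guides.

```turtle
@prefix rep:   <http://www.openrdf.org/config/repository#> .
@prefix sr:    <http://www.openrdf.org/config/repository/sail#> .
@prefix sail:  <http://www.openrdf.org/config/sail#> .
@prefix owlim: <http://www.ontotext.com/trree/owlim#> .

[] a rep:Repository ;
   rep:repositoryID "example-repo" ;            # placeholder ID
   rep:repositoryImpl [
      rep:repositoryType "openrdf:SailRepository" ;
      sr:sailImpl [
         sail:sailType "owlim:Sail" ;
         owlim:ruleset "owl-horst"              # pick the least expressive
                                                # rule-set the application needs
      ]
   ] .
```

Choosing, say, owl-horst instead of owl-max here trades expressivity for faster loading and inference, per the layering described above.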
The pre-defined rule-sets are layered such that each one extends the preceding one. The following list is ordered by increasing expressivity:
GraphDB has an internal rule compiler that can be used to configure the TRREE with a custom set of inference rules and axioms. The user may define a custom rule-set in a *.pie file (e.g. MySemantics.pie). The easiest way to do this is to start modifying one of the .pie files that were used to build the pre-compiled rule-sets – all pre-defined .pie files are included in the distribution. The syntax of the .pie files is easy to follow.
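As a hedged illustration of what such a file looks like (modelled on the pre-defined .pie files shipped with the distribution; the ex:partOf property is a made-up example, and the exact syntax should be checked against the included files), a minimal custom rule-set declaring one transitive property rule might read:

```
Prefices
{
     rdf  :  http://www.w3.org/1999/02/22-rdf-syntax-ns#
     ex   :  http://example.com/ontology#
}

Axioms
{
     // statements asserted once, when the repository is initialised
     <ex:partOf> <rdf:type> <rdf:Property>
}

Rules
{
Id: partOf_transitivity
     x <ex:partOf> y
     y <ex:partOf> z
     -------------------------------
     x <ex:partOf> z
}
```

Each rule lists premise triple patterns above the line and the derived triple below it; variables (x, y, z) bind to any node during forward chaining.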
Regarding OWL compliance, GraphDB supports several OWL-like dialects: OWL Horst (owl-horst), OWL Max (owl-max), which covers most of OWL Lite and RDFS, OWL2 QL (owl2-ql) and OWL2 RL (owl2-rl).
With the owl-max rule-set GraphDB supports the following semantics:
The differences between OWL Horst and the OWL dialects supported by GraphDB (owl-horst and owl-max) can be summarised as follows:
Even though the concrete rules pre-defined in GraphDB differ from those defined in OWL Horst, the complexity and decidability results reported for R-entailment are relevant for TRREE and GraphDB. More precisely, the rules in the owl-horst rule-set do not introduce new B-Nodes, which means that R-entailment with respect to them takes polynomial time. In KR terms, this means that owl-horst inference within GraphDB is tractable.
The correctness of the support for OWL semantics (for those primitives that are supported) is checked against the normative Positive- and Negative-entailment OWL test cases. These tests are provided in the GraphDB distribution and documented in the GraphDB user guides.