View Source

OWLIM Frequently Asked Questions

h1. General

h5. What is OWLIM?

OWLIM is a semantic repository - a software component for storing and manipulating huge quantities of RDF data. OWLIM is packaged as a Storage and Inference Layer (SAIL) for the Sesame OpenRDF framework.

h5. What is a Semantic Repository?

A semantic repository is a software component for storing and manipulating RDF data. It is made up of three distinct components:
* An RDF database for storing, retrieving, updating and deleting RDF statements (triples)
* An inference engine that uses rules to infer 'new' knowledge from explicit statements
* A powerful query engine for accessing the explicit and implicit knowledge

h5. Where does the name "OWLIM" come from?

The name originally comes from the term "OWL In Memory" and is fitting for what became OWLIM-Lite. However, OWLIM-SE uses a transactional, index-based file-storage layer, so "In Memory" is no longer appropriate. Nevertheless, the name has stuck, and few people ever ask where it came from.

h5. How do I use OWLIM?

OWLIM is packaged as a Storage and Inference Layer (SAIL) for the Sesame RDF framework. OWLIM can be used in two different ways:

One approach is to use it as a library; an example of this is provided in the release distribution and can be started by running 'example.cmd' in the 'getting-started' folder.

Another approach is to download the full version of Sesame and configure OWLIM-SE as a plug-in. This method uses the Sesame HTTP server hosted in Tomcat (or similar) and in this way you can use Sesame together with OWLIM as a server application, accessed via the standard Sesame APIs.

Sesame version 2.2 onwards includes the Sesame Workbench - a convenient Web Application for managing repositories, importing/exporting RDF data, executing queries, etc. For more information please check the "doc" folder of the OWLIM-SE archive.

h5. What is the difference between OWLIM-Lite and OWLIM-SE?

OWLIM-Lite and OWLIM-SE are identical in terms of usage and integration for storing and managing RDF data. They share the same inference mechanisms and semantics (rule-compiler, etc). The different editions of OWLIM use different indexing, inference, and query evaluation implementations, which results in different performance, memory requirements, and scalability.

OWLIM-Lite is designed for medium data volumes (below 100 million statements) and for prototyping. Its key characteristics are as follows:
* reasoning and query evaluation are performed in main memory
* it employs a persistence strategy that ensures data preservation and consistency
* the loading of data, including reasoning, is extremely fast
* easy configuration

OWLIM-SE is suitable for handling massive volumes of data and very intensive querying activities. It is designed as an enterprise-grade database management system. This has been made possible through:
* file-based indices, which enable it to scale to billions of statements even on desktop machines
* special-purpose index and query optimization techniques, ensuring fast query evaluation against very large volumes of data
* optimized handling of owl:sameAs (identifier equality) to boost efficiency for data integration tasks
* optimised retraction of explicit statements and their inferences, which allows for efficient delete operations
* a range of powerful 'advanced features' including: Full text search (Node search, RDF search), ranking, selection and notifications

See the OWLIM-SE documentation for more details.

h5. What kind of SPARQL conformance is supported?

All editions of OWLIM support:

* [SPARQL 1.1 Update (May 2011 draft)|]
* [SPARQL 1.1 Query (May 2011 draft)|]
* [SPARQL 1.1 protocol (January 2010 draft)|]
* [SPARQL 1.1 Federation extensions|]

The [SPARQL 1.1 Graph Store Protocol|] will be supported in subsequent versions of OWLIM.

h1. Technical

h5. How does OWLIM-SE index triples?

There are several types of indices available, *all* of which apply to *all* triples, whether explicit or implicit. These indices are maintained automatically.

The main indexes that are always used are:
* predicate-object-subject (POS)
* predicate-subject-object (PSO)
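A minimal sketch (plain Python, nothing like OWLIM's actual storage layer) of why these two orderings are useful: a sorted index whose key starts with the predicate can answer any triple pattern with a bound predicate via a binary-search range scan.

```python
import bisect

# Toy triples as (subject, predicate, object) tuples.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "age", "42"),
]

# The two main orderings: sorted tuples stand in for the on-disk indices.
pos = sorted((p, o, s) for s, p, o in triples)   # predicate-object-subject
pso = sorted((p, s, o) for s, p, o in triples)   # predicate-subject-object

def scan(index, prefix):
    """Range scan: all entries whose leading components equal the prefix."""
    start = bisect.bisect_left(index, prefix)
    result = []
    for entry in index[start:]:
        if entry[:len(prefix)] != prefix:
            break                    # sorted order: no further matches
        result.append(entry)
    return result

# Pattern "?s knows ?o" -> scan either index with the bound predicate.
print(scan(pso, ("knows",)))  # [('knows', 'alice', 'bob'), ('knows', 'bob', 'carol')]
```

The same `scan` with a two-component prefix, e.g. `("knows", "bob")` against `pos`, answers patterns with both predicate and object bound.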

There are other optional indices and these have advantages for specific datasets, retrieval patterns and query loads. These are switched off by default.

For some datasets, or when executing queries with triple patterns that use a wild-card for the predicate, a pair of indices can be used that map from entities (subject, object) to predicate, i.e.
* subject-predicate (SP)
* object-predicate (OP)

This pair of indices is known as 'predicate lists'; see the enablePredicateList parameter in the user guide.

For more efficient processing of named graphs (and triplesets), two other indices can be used:
* predicate-context-subject-object-tripleset (pcsot)
* predicate-tripleset-subject-object-context (ptsoc)

These can be switched on using the build-pcsot and build-ptsoc parameters.

There are also several variations on full-text-search indexes for both Node Search and lucene-based RDF Search. Details of these can be found in the user guide.
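The optional index parameters mentioned above are set in the repository's Turtle configuration/template file. The following is a hedged sketch only: the owlim: namespace URI and the exact placement within the template are assumptions based on typical OWLIM-SE .ttl templates, so check the templates shipped in your distribution for the authoritative form.

```turtle
@prefix owlim: <http://www.ontotext.com/trree/owlim#> .

# Inside the sail implementation block of the repository template:
[] owlim:enablePredicateList "true" ;   # SP and OP 'predicate list' indices
   owlim:build-pcsot         "true" ;   # predicate-context-subject-object-tripleset
   owlim:build-ptsoc         "true" .   # predicate-tripleset-subject-object-context
```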

h5. How much disk space does OWLIM-SE require to load my dataset?

There is no simple answer to this question, since it depends on reasoning complexity (how many triples are inferred), how long the URIs are, what additional indices are used, etc. As an example, the following table shows the disk space requirement in bytes per explicit statement when loading the Wordnet dataset with various OWLIM-SE configurations:

|| Configuration || Bytes per explicit statement ||
| owl2-rl \+ all optional indices | 366 |
| owl2-rl | 236 |
| owl-horst \+ all optional indices | 290 |
| owl-horst | 196 |
| empty \+ all optional indices | 240 |
| empty | 171 |

When planning storage capacity based on input RDF file size, the expansion depends not only on the OWLIM-SE configuration, but also on the RDF file format used and the complexity of its contents. The following table gives a rough estimate of the expansion to be expected from an input RDF file to OWLIM-SE storage requirements. For example, when using OWL2-RL with all optional indices turned on, OWLIM-SE will need about 6.7GB of storage space to load a one-gigabyte N3 file; with no inference ('empty') and no optional indices, it will need about 0.7GB of storage space to load a one-gigabyte Trix file. Again, these results were measured with the Wordnet dataset:

| || N3 || N-Triples || RDF/XML || Trig || Trix || Turtle ||
|| owl2-rl \+ all optional indices | 6.7 | 2.2 | 4.8 | 6.6 | 1.5 | 6.7 |
|| owl2-rl | 4.3 | 1.4 | 3.1 | 4.2 | 1.0 | 4.3 |
|| owl-horst \+ all optional indices | 5.3 | 1.7 | 3.8 | 5.2 | 1.2 | 5.3 |
|| owl-horst | 3.6 | 1.2 | 2.6 | 3.5 | 0.8 | 3.6 |
|| empty \+ all optional indices | 4.4 | 1.4 | 3.1 | 4.3 | 1.0 | 4.4 |
|| empty | 3.1 | 1.0 | 2.2 | 3.1 | 0.7 | 3.1 |
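The table can be applied as a simple lookup: multiply the input file size by the factor for its format and the intended configuration. A minimal sketch (the factors are the Wordnet measurements from the table above, so treat the results as indicative only):

```python
# Expansion factors (storage bytes per input-file byte) taken from the
# Wordnet measurements above -- indicative values, not guarantees.
EXPANSION = {
    ("owl2-rl + all optional indices", "N3"):   6.7,
    ("owl2-rl + all optional indices", "Trix"): 1.5,
    ("owl-horst", "N-Triples"):                 1.2,
    ("empty", "N3"):                            3.1,
    ("empty", "Trix"):                          0.7,
}

def estimated_storage_gb(file_size_gb, configuration, rdf_format):
    """Rough storage estimate: input file size times the measured factor."""
    return file_size_gb * EXPANSION[(configuration, rdf_format)]

print(estimated_storage_gb(1.0, "owl2-rl + all optional indices", "N3"))  # 6.7
```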

h5. How much disk space does OWLIM-SE need per statement?

Firstly, note that OWLIM-SE computes inferences as new explicit statements are committed to the repository. The number of inferred statements can be zero (when using the 'empty' rule set) or many multiples of the number of explicit statements (it depends on the chosen ruleset and the complexity of the data).

The disk space required for each statement further depends on the size of the URIs and literals, but for typical datasets around 200 bytes is required with only the default indices, up to about 300 bytes when all optional indices are turned on.

So when using the default indices, a good estimate for the amount of disk space you will need is 200 bytes per statement (explicit and inferred), i.e.
* 1 million statements => \~200 Megabytes storage
* 1 billion statements => \~200 Gigabytes storage
* 10 billion statements => \~2 Terabytes storage
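The rule of thumb above can be expressed directly. The bytes-per-statement figures are the typical values quoted in this answer (around 200 bytes with the default indices, up to about 300 with all optional indices):

```python
def estimated_disk_bytes(total_statements, bytes_per_statement=200):
    """Disk-space estimate. total_statements must count explicit AND
    inferred statements; use ~200 bytes/statement for the default indices,
    ~300 when all optional indices are turned on."""
    return total_statements * bytes_per_statement

print(estimated_disk_bytes(1_000_000))           # 200000000  (~200 MB)
print(estimated_disk_bytes(1_000_000_000, 300))  # 300000000000  (~300 GB)
```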

h5. Can OWLIM answer queries in parallel?

Yes. Both OWLIM-Lite and OWLIM-SE can process queries concurrently.

Furthermore, when OWLIM-SE is used in a cluster configuration, the throughput of parallel query answering can be scaled (almost) linearly by adding more nodes.

h5. What kind of transaction isolation is supported?

OWLIM supports the read-committed isolation level, i.e. pending updates are not visible to other connected users until the complete update transaction has been committed. However, for efficiency reasons and unlike typical relational database behaviour, uncommitted changes are not 'visible' even using the connection that made the updates.
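A toy sketch (plain Python, not OWLIM code) of the visibility rule described above: readers only ever see the committed snapshot, and pending writes are invisible even on the connection that made them.

```python
class ReadCommittedStore:
    """Toy model of read-committed visibility as described above."""

    def __init__(self):
        self.committed = set()   # visible to all connections
        self.pending = set()     # staged by an open transaction

    def add(self, triple):
        self.pending.add(triple)

    def commit(self):
        self.committed |= self.pending
        self.pending.clear()

    def query(self):
        # Pending changes are NOT included, even for the writer.
        return set(self.committed)

store = ReadCommittedStore()
store.add(("s", "p", "o"))
print(store.query())   # set() -- uncommitted update is invisible
store.commit()
print(store.query())   # {('s', 'p', 'o')} -- visible after commit
```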

h5. Are solid-state drives better than hard-disk drives?

Yes. Unlike relational databases, a semantic database needs to conduct inference for inserted and deleted statements. This involves making highly unpredictable joins using statements anywhere in the indices for all new/deleted statements. Even with careful paging, a large number of disk seeks can be expected, and SSDs perform far better than HDDs for this workload.

h5. What kind of RAID set-up is best?

RAID-0 gives good performance, but is more likely to suffer problems due to disk failure. RAID-5 is a good balance between resilience/redundancy and cost. Using SSDs, we have (little more than anecdotal) evidence that RAID-0 is fast, RAID-1 is slower, and RAID-5 is slower with fewer than 4 disks and about the same as RAID-0 with 4 or more disks.

h5. Why won't Sesame start in Tomcat?

This problem can manifest itself in many ways after deploying the Sesame/OWLIM war files to Tomcat's webapps directory. If you are unable to set the server URL in the Workbench, this is an indication that the problem has occurred. It is most likely due to a permissions problem on the logging directory for the openrdf-sesame server. To check this, point your browser directly at the Sesame server with a URL similar to the following (adjusting host and port for your installation):

http://localhost:8080/openrdf-sesame

If you receive a stack trace containing the following:

bq. Invocation of init method failed; nested exception is Unable to create logging directory /usr/share/tomcat6/.aduna/openrdf-sesame/logs

then this indicates that Tomcat does not have write permission to its data directory (where it stores configuration, logs and actual repository data). To fix this, log in as root to the server machine and do the following:

mkdir /usr/share/tomcat6/.aduna/
chown tomcat6:tomcat6 /usr/share/tomcat6/.aduna/

Now when you use the server URL in your browser you should see the Sesame server welcome screen.

h1. Administration

h5. How do I flush the repository contents to disk without shutting down the whole repository?

Commit a statement whose predicate is the special system 'flush' predicate (see the OWLIM-SE user guide for the exact URI), possibly together with other statements as part of a single transaction. This forces the repository contents to be flushed to disk during the next commit().

This works only in OWLIM-SE through the Sesame interface (i.e. it is not available in OWLIM-Lite, nor through the ORDI framework).

h5. I am getting this exception: java.lang.NoSuchMethodError: org.apache.lucene.queryParser.QueryParser

You must add the Lucene jar file to the Java classpath (the Lucene core jar is included with the distribution). OWLIM-SE 4 is known to work properly with Lucene 3.0.

h5. I am getting this exception: java.lang.NoClassDefFoundError: Could not initialize class com.infomatiq.jsi.rtree.RTreeWithCoords

The jsi, log4j, sil and trove4j jar files (included with the distribution) must be added to the classpath.

h5. Why does my repository report a different number of explicit statements when I change rule sets?

Each rule set defines both rules and some schema statements, otherwise known as axiomatic triples. These (read-only) triples are inserted into the repository at initialisation time and count towards the total number of reported 'explicit' triples. The variation may be up to the order of hundreds, depending upon the rule set.

h5. Why can't I delete some statements?

Statements that were added during repository initialisation, either because they are asserted in rule files or because they were loaded using the "imports" parameter, are marked read-only. Having read-only statements (especially schema definition statements) is one way to ensure that 'smooth delete' can operate very quickly.
{note}OWLIM-SE does now allow read-only/schema statements to be modified when the repository is in a special mode. This feature allows fast delete operations while ensuring that schemas can be changed when necessary. Full details on how to do this can be found in the [OWLIM-SE user guide|OWLIM:BigOWLIM Reasoner#Schemaupdatetransactions].{note}

h5. How can I retrieve my repository configurations from the Sesame SYSTEM repository?

When using a LocalRepositoryManager, Sesame stores the configuration data for repositories in its own 'SYSTEM' repository. A Tomcat instance will do the same, and you will see 'SYSTEM' in the list of repositories that the instance is managing. To see what configuration data is stored, connect to the SYSTEM repository and execute the following query:

# Namespaces as used by the Sesame 2 SYSTEM repository
PREFIX sys: <http://www.openrdf.org/config/repository#>
PREFIX sail: <http://www.openrdf.org/config/sail#>

SELECT ?id ?type ?param ?value
WHERE {
  ?rep sys:repositoryID ?id .
  ?rep sys:repositoryImpl ?impl .
  ?impl sys:repositoryType ?type .
  OPTIONAL {
    ?impl sail:sailImpl ?sail .
    ?sail ?param ?value .
  }
  # FILTER( ?id = "specific_repository_id" ) .
}
ORDER BY ?id ?param

This will return the repository ID and type, followed by name-value pairs of configuration data for SAIL repositories, including the SAIL type - "owlim:Sail" for OWLIM-SE and "swiftowlim:Sail" for OWLIM-Lite. OWLIM-Enterprise master nodes are not SAIL repositories and have the type "owlim:ReplicationCluster".

If you uncomment the FILTER clause, you can substitute a repository id to get the configuration for just that repository.

h5. How do I change the configuration of an OWLIM Sesame repository after it has been created?

There is no easy generic way of changing the configuration - it is stored in the SYSTEM repository created and maintained by Sesame. However, OWLIM allows these parameters to be overridden by specifying the parameter values as JVM options. For instance, by passing the \-Dcache-memory=1g option to the JVM, OWLIM-SE will read it and use its value to override whatever was configured in the .ttl file. This is convenient for temporary set-ups that require easy and fast configuration changes, e.g. for experimental purposes.

Changing the configuration in the SYSTEM repository is trickier, because the configurations are usually structured using blank node identifiers - which are always unique, so attempting to modify a statement with a blank node by using the same blank node identifier will fail. However, this can be achieved with SPARQL UPDATE using a DELETE-INSERT-WHERE command as follows:

PREFIX sys: <http://www.openrdf.org/config/repository#>
PREFIX sail: <http://www.openrdf.org/config/sail#>
PREFIX onto: <http://www.ontotext.com/trree/owlim#>
DELETE { GRAPH ?g { ?sail ?param ?old_value } }
INSERT { GRAPH ?g { ?sail ?param ?new_value } }
WHERE {
  GRAPH ?g {
    ?rep sys:repositoryID ?id .
    ?rep sys:repositoryImpl ?impl .
    ?impl sys:repositoryType ?type .
    ?impl sail:sailImpl ?sail .
    ?sail ?param ?old_value .
  }
  FILTER( ?id = "repo_id" ) .
  FILTER( ?param = onto:ruleset ) .
  BIND( "rdfs" AS ?new_value ) .
}

Modify the two FILTER clauses and the BIND to specify the repository ID, the parameter and the new value, then execute against the SYSTEM repository. In this example, where the rule-set is changed, there are additional considerations (the inferred closure must be rebuilt), but for most other parameters that can be changed after the repository has been created, this approach works fine.

h5. How do I set up license files for OWLIM-SE and OWLIM-Enterprise?

Both OWLIM-SE and OWLIM-Enterprise worker nodes require license files for long term use. These can be obtained from Ontotext. When purchasing OWLIM-SE, you will receive one license file. When purchasing OWLIM-Enterprise, you will receive a license file for the worker nodes. Master nodes do not require a license file. License files should be stored where they are accessible to the processes that need to read them to validate the software, i.e. Tomcat instances or application software that embeds OWLIM.

When installing OWLIM-SE or OWLIM-Enterprise worker nodes, the license file can be set in several ways:
* Set {{owlim:owlim-license}} in a Turtle configuration/template file, e.g. when using the Sesame console.
* In the 'License file' field when using the Sesame workbench (the repackaged version in the OWLIM distribution).
* Using the CATALINA_OPTS environment variable, i.e. {{-Dowlim-license=<full_path_to_license>}}, which will apply to all OWLIM-SE repositories as it overrides each configured repository's license file setting.

OWLIM-SE and OWLIM-Enterprise worker node licenses are different and will not work if used with the wrong software.

h5. How can I load a large RDF/XML file without getting an "entity expansion limit exceeded" error?

The XML parser will generate an error similar to the following:

bq. Parser has reached the entity expansion limit "64,000" set by the Application.

when it generates more than a specified number of 'entities'. The default limit for the built-in Java XML parser is 64,000, but it can be raised using a Java system property. To increase the limit, pass an option similar to the following (increasing the value as necessary) to the JVM in which OWLIM/Sesame is running. If running in Tomcat, this must be passed to the Tomcat instance using the CATALINA_OPTS environment variable:

-DentityExpansionLimit=1000000

h5. How can I upgrade to a new version of OWLIM-SE without exporting and reimporting all my data?

There might be subtle differences between versions of OWLIM-SE, which means that exporting and re-importing explicit statements from the older version of the repository is the best and safest means to upgrade. However, this can be lengthy with large databases.

Probably the fastest way to upgrade is to make a binary copy of the OWLIM storage folder and use the new version 'on top'. This will cause it to automatically upgrade the file formats (there are minor differences between versions) and from then on it should run fine with the new version.

1. Use the new version of OWLIM-SE to create an empty repository (with the right configuration, e.g. ruleset)
2. Shutdown any running OWLIM instance
3. Locate the storage folders for the current instance and the new instance
4. Delete the contents of the new storage folder (it will be called something like repo_id/storage)
5. Copy all files (and sub-directories, if present) from the old storage folder to the new storage folder
6. Restart the new instance

There is a good chance that it will take quite a long time to initialise as the storage files are modified, but it should be quicker than re-importing all the data.
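The delete-and-copy steps (4 and 5) can be sketched as follows. The example paths are hypothetical and repo_id must be replaced with your repository's ID; the sketch requires Python 3.8+ for copytree's dirs_exist_ok parameter:

```python
import pathlib
import shutil

def copy_storage(old_storage: str, new_storage: str) -> None:
    """Empty the new storage folder (step 4), then copy the old instance's
    storage files and sub-directories into it (step 5)."""
    new = pathlib.Path(new_storage)
    for child in new.iterdir():      # step 4: delete contents only,
        if child.is_dir():           # keeping the storage folder itself
            shutil.rmtree(child)
        else:
            child.unlink()
    shutil.copytree(old_storage, new, dirs_exist_ok=True)  # step 5

# Hypothetical example paths:
# copy_storage("/opt/owlim-old/repositories/repo_id/storage",
#              "/opt/owlim-new/repositories/repo_id/storage")
```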