
New features and significant bug fixes/updates for the last few releases are recorded here. Full version numbers are given as:

{{<major>.<minor>.<build>}}

e.g. 5.3.5928, where the major number is 5, the minor number is 3 and the build number is 5928. Releases with the same major and minor version numbers do not contain any new features; releases with later build numbers only contain fixes for bugs discovered since the previous release. New or significantly changed features are released under a higher major or minor version number.
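The numbering scheme can be illustrated with a short snippet (Python is used here purely for illustration):

```python
# Split a full version number into its major, minor and build parts.
version = "5.3.5928"
major, minor, build = (int(p) for p in version.split("."))

# Two releases with the same major.minor differ only in bug fixes:
a, b = "5.3.5928", "5.3.6002"
same_features = a.split(".")[:2] == b.split(".")[:2]
print(same_features)  # True: only the build number differs
```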

h1. GraphDB version 6.0-RC3 (build 7914)

h2. Fixes:

* The plugins were moved to <webapps>/openrdf-sesame/WEB-INF/classes/plugins;
* Running GraphDB under embedded Tomcat failed with a NullPointerException, because the webapps/ folder did not exist.

h1. GraphDB version 6.0-RC2 (build 7892)

h2. Improvements:

* Added the mini LDBC Semantic Publishing Benchmark ([]) in the benchmark/ldbc-spb folder of the distribution;
* The plugins are now in the <webapps>/openrdf-sesame/plugins folder. The Lucene plugin is enabled by default; this can be overridden with the \-Dregister-external-plugins option;
* Minor rearrangement of the files in the main distribution folder (all .pie files are now in the rules/ subfolder and the scripts in the scripts/ subfolder).
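As a sketch, the option could be passed to Tomcat via CATALINA_OPTS when starting the server; note that the exact value and semantics of \-Dregister-external-plugins are an assumption here, not taken from the documentation above:

```shell
# Hypothetical sketch: pass the system property when starting Tomcat.
# The value "true" for -Dregister-external-plugins is an assumption.
export CATALINA_OPTS="$CATALINA_OPTS -Dregister-external-plugins=true"
echo "$CATALINA_OPTS"
```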

h2. Fixes:

* Fixed issue with the default/evaluation license;
* Fixed issue with the LoadRDF tool.

h1. GraphDB version 6.0 (build 7784)

GraphDB 6.0 is a re-branded OWLIM 5.6. The differences listed below are relative to the last stable OWLIM 5.4 release.

h2. Improvements:

* High-Availability Cluster;
* Fast writes in SAFE mode (an OWLIM 5.5 improvement, which led to incompatible binary formats between 5.4 and 5.5+);
* LoadRDF tool for faster bulk loading of data; speeds of \~100K statements/s and above, without inference;
* Explain-Plan-like functionality;
* LVM-based Backup and Replication.
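At the quoted LoadRDF rate, a rough load-time estimate looks like this (a back-of-the-envelope sketch; the dataset size is hypothetical and the rate is the lower bound quoted above, with inference disabled):

```python
# Rough bulk-load time at the quoted LoadRDF rate (no inference).
statements = 1_000_000_000      # hypothetical 1-billion-statement dataset
rate = 100_000                  # ~100K statements/s (lower bound from above)

seconds = statements / rate
print(seconds / 3600)  # roughly 2.78 hours
```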

h2. Fixes:

* Databases created with one setting of the "entity-id-size" parameter (32-bit vs. 40-bit) and opened with another setting would crash in versions prior to 6.0. Now an exception is thrown and the repository is not initialised.
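The incompatibility follows from the ID width itself: assuming entity IDs are plain fixed-width integers (an assumption for illustration, not a description of the on-disk format), the two settings address very different numbers of entities, so files written with one width cannot be read with the other:

```python
# Maximum number of distinct entity IDs for each "entity-id-size" setting.
capacity_32 = 2 ** 32   # 32-bit IDs
capacity_40 = 2 ** 40   # 40-bit IDs

print(capacity_32)  # 4294967296 (~4.3 billion entities)
print(capacity_40)  # 1099511627776 (~1.1 trillion entities)
```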

h1. Version 5.6 (build 7713)

h2. Improvements:

* [LVM-based Backup and Replication|OWLIM56:LVM-based Backup and Replication] \- backup can optionally be based on an LVM shadow volume copy, which makes it faster; the worker is released a few seconds after the backup is started (ported from 5.4).
* [New Cluster Test (cluster deployment and test tool)|OWLIM56:New Cluster Test (cluster deployment and test tool)] \- a tool for automated deployment and testing of clusters of various sizes. It can deploy on AWS and on local instances, supports the Docker format, allows acceptance, stress and load tests to be run on the deployed clusters, and can optionally create a Nagios configuration for the deployed cluster.
* LoadRDF tool \- a tool for faster bulk loading of data, merged from the 5.5 branch.
* Merged the EntityPool reverse cache from 5.5 \- speeds up large updates (100\+ statements).

h2. Fixes:

* All AcceptanceTests that were previously failing are now fixed:
** Improved communication between master and worker nodes with respect to the acceptance tests;
** Worker thread: fixed out-of-sync handling upon initialisation;
* Improved logging; in particular, fixed the JVM omitting some stack traces;
* Initialisation of a 5.6 worker from a 5.4 image now skips "entityIdSize" and InferencerCRC from

h1. Version 5.6 beta 3 (build 7659)

* cluster: fixed empty worker initialisation;
* worker: fixed initial update handling;
* log sync: 10 s wait between idle rounds (network bandwidth optimisation);
* Tx log: fixed an initialisation bug;
* fixed an issue where an update might fail while replication is in progress;
* miscellaneous bug fixes in the cluster utils (deployment/status) and proxy restarts;
* detailed logging for:
** replication cluster worker events;
** HTTP client stats;
** Tx log initialisation.

Known issues:
* AcceptanceTests failing: W4, M4, MW3, MW7, MW8;
* the new/experimental LVM backup/restore feature is not yet ported from 5.4 (thus the MW10 and MW11 AcceptanceTests, which are based on it, are not implemented).

h1. Version 5.6 beta 2 (build 7523)

* updated AcceptanceTests in the MastersAndWorkers section;
* improved the replication start/wait methods;
* several fixes to the TxLog protocol;
* fixed the replication logic to delete the worker repo only when the remote worker confirms the replication;
* additional sanity checks added to the Master-to-Master and Master-to-Worker synchronisation;
* improved logging, incl. "SPLITBRAIN" events logged both to the logs and to JMX.

Known issues:
* some MW\* tests with forced replication fail randomly, but rarely; related to the Proxy tool.

h1. Version 5.6 beta 1 (build 7368)

Ontotext redesigned its cluster architecture to support the case of two or more separate data centres (each with its own Master and Worker nodes), and to provide asynchronous transactions and Master fail-over. OWLIM Enterprise already supported Master-Worker clusters with Automatic Replication, Load-balancing and Transaction Logs, but in this release these components are improved. OWLIM 5.6 is based on 5.5 and inherits its write performance improvements.
* [OWLIM56:Client Fail-over Utility], which can be configured to fall back to the next master if the current master becomes unavailable;
* Better TransactionLog support (see: [OWLIM56:Transaction Log Improvements]) \- updates are synchronised between all masters in all data centres;
* All Masters are now Read/Write;
* [OWLIM56:Smart Replication];
* Protocol backwards compatibility - the ability to upgrade the OWLIM cluster without downtime, following the OWLIM Upgrade Procedure;
* [External Plug-ins|OWLIM56:External Plug-ins] \- the plugins in OWLIM have been moved into a separate plugin directory and can now be upgraded/maintained separately.

h2. Known issues:

* Transaction consistency concern. In the new cluster, the Master responds to a client update as soon as the first node completes it.

In a single-threaded scenario, the next query can be evaluated on a node that has either not yet received or not yet completed the update, which can lead to inconsistency from the client application's point of view. This deviates from the update processing in 5.4, where the response is sent only after the last of the available nodes completes the update \[OWLIM-1483\].
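The concern can be illustrated with a toy model (purely illustrative; the node names, the single-node acknowledgement and the stale read are assumptions of the model, not the actual cluster protocol):

```python
# Toy model: the master acks an update once ONE node has applied it,
# so a follow-up read routed to another node may miss the write.
nodes = {"w1": set(), "w2": set()}

def update(statement):
    # 6.0-style behaviour: ack as soon as the first node completes the
    # update; the second node would apply it asynchronously (not yet here).
    nodes["w1"].add(statement)
    return "ack"

def query(node, statement):
    return statement in nodes[node]

update("s1")
print(query("w1", "s1"))  # True  - read hits the node that applied it
print(query("w2", "s1"))  # False - read hits a lagging node: stale result
```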