Starting from version 6, GraphDB includes three separate products with their own version numbers: GraphDB Engine, GraphDB Workbench and GraphDB Connectors (experimental). New features and significant bug fixes/updates for the last few releases are recorded here. Each product's full version number has the form major.minor.build, e.g. 5.3.5928, where the major number is 5, the minor number is 3 and the build number is 5928.
The integrated releases have their own version, e.g. 6.0-RC1.
Releases with the same major and minor version numbers do not contain any new features. The only difference is that releases with later build numbers contain fixes for bugs discovered since the previous release. New or significantly changed features are released with a higher major or minor version number.
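The versioning scheme above can be illustrated with a small parser. This is a hypothetical helper for illustration only, not part of any GraphDB distribution:

```java
// Minimal sketch: split a GraphDB-style version string such as "5.3.5928"
// into its major, minor and build components. Hypothetical helper,
// not part of GraphDB itself.
public class VersionParser {
    public static int[] parse(String version) {
        String[] parts = version.split("\\.");
        if (parts.length != 3) {
            throw new IllegalArgumentException("Expected major.minor.build: " + version);
        }
        return new int[] {
            Integer.parseInt(parts[0]), // major: new or significantly changed features
            Integer.parseInt(parts[1]), // minor: new features
            Integer.parseInt(parts[2])  // build: bug fixes only
        };
    }

    public static void main(String[] args) {
        int[] v = parse("5.3.5928");
        System.out.println("major=" + v[0] + " minor=" + v[1] + " build=" + v[2]);
    }
}
```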
This is an integrated release that includes:
- GraphDB Engine 6.0.8070
- GraphDB Workbench 6.0.1.RC1
- GraphDB Connectors 3.0.0.RC2
- Improvements to the HA Cluster with respect to the New Cluster Tests: improved intra-cluster communications, worker initialization, status reporting, diagnostics and logging;
- Query monitoring via JMX - the full text of the query is now visible
- Fixes for the Constraint Violation support and multiple rulesets;
- Faster update speeds
- GraphDB's custom N-Triples/N-Quads parser is now used by default, so these formats are parsed faster than other formats;
- When a transaction uses the empty ruleset, the commit can be applied to all indexes in parallel. To use this experimental feature, add the special system statement _:b <http://owlim.ontotext.com/owlim/useParallel> _:b at the beginning of the transaction. This makes sense for larger transactions (10K statements and above).
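The idea behind the parallel commit can be sketched independently of GraphDB: with the empty ruleset no inference runs, so each index can apply the same statement batch on its own thread. A rough illustration using an ExecutorService (StatementIndex is a hypothetical stand-in, not GraphDB's internal API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of a parallel commit: with the empty ruleset no inference is needed,
// so a transaction's statements can be written to every index concurrently.
// StatementIndex is a hypothetical stand-in for the engine's internal indexes.
public class ParallelCommit {
    interface StatementIndex {
        void add(String statement);
    }

    static void commit(List<String> statements, List<StatementIndex> indexes) {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, indexes.size()));
        try {
            List<Future<?>> futures = new ArrayList<>();
            for (StatementIndex index : indexes) {
                // Each index receives the full batch on its own thread.
                futures.add(pool.submit(() -> statements.forEach(index::add)));
            }
            for (Future<?> f : futures) {
                try {
                    f.get(); // commit completes only when every index is done
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException("parallel commit failed", e);
                }
            }
        } finally {
            pool.shutdown();
        }
    }
}
```

The coordination overhead is why this pays off mainly for large transactions: for a handful of statements the thread hand-off costs more than it saves.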
- OWLIM-1628 Fixed an issue where a ruleset could not be explored when the empty ruleset was set initially.
- OWLIM-1626 RepositoryException in Worker is not thrown by the Master
- OWLIM-1600 Query returns no results when using FILTER and BIND(if(...)) in it.
- T-10 Implemented an automatic entity pool restore procedure, which can recover a truncated entity pool and remove from the repository the statements whose IDs are beyond the new entity pool size.
- OWLIM-1603 OWLIM crashes with a lock error without obvious reason (there is no other process that might have locked the repo).
- OWLIM-1592 Queries with at least one sub-select that intersects with an ordinary block of statement patterns performed poorly, because of multiple clones and transforms of Sesame's query model into OWLIM's.
- OWLIM-1593 Fixed bug in MainQuery.clone() (when using Subselect and there are OPTIONALs)
- OWLIM-1572 Query Monitoring - show query text instead of query id
- OWLIM-1559 Fixed property path bug when same property paths are repeated in the query
- F-320 JMX: NumberOfExplicitTriples and NumberOfTriples shows -1 even though data has been written to the triple store
- OWLIM-1563 Fixed the issue with custom ruleset + disable-sameAs=true.
- OWLIM-1559 Implemented a shortcut in the MINUS operator which allows for faster calculation when the MINUS is over two subqueries with one triple pattern (which may have filters).
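The MINUS shortcut resembles a hash-based anti-join: rather than re-evaluating the right-hand side for every left-hand binding, its single triple pattern is materialised once into a hash set and each left-hand result is checked against it. A simplified, GraphDB-independent sketch of the idea (bindings reduced to strings for brevity):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Simplified illustration of a hash-based MINUS: materialise the right-hand
// subquery's results once, then keep only left-hand results that do not
// appear in that set. This mirrors the general shape of the optimisation,
// not GraphDB's actual implementation.
public class MinusShortcut {
    public static List<String> minus(List<String> left, List<String> right) {
        Set<String> exclude = new HashSet<>(right); // one pass over the right side
        List<String> result = new ArrayList<>();
        for (String binding : left) {
            if (!exclude.contains(binding)) {
                result.add(binding);
            }
        }
        return result;
    }
}
```

With n left-hand and m right-hand results this is O(n + m) instead of the O(n * m) of a naive nested evaluation, which is where the speed-up comes from.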
- The full list of changes in the latest version is available on this page: GraphDB-Workbench Release Notes
This is an integrated release that includes:
- GraphDB Engine 6.0.7914
- GraphDB Workbench 6.0.1
- GraphDB Connectors 3.0.0.RC2
- The plugins were moved to <webapps>/openrdf-sesame/WEB-INF/classes/plugins;
- Running GraphDB under embedded Tomcat failed with an NPE (because of a non-existent webapps/ folder).
- Added mini LDBC Semantic Publishing Benchmark (http://ldbc.eu) into benchmark/ldbc-spb folder in the distribution;
- The plugins are now in the <webapps>/openrdf-sesame/plugins folder. The Lucene plugin is enabled by default. This can be overridden with the -Dregister-external-plugins option;
- Minor rearrangement of the files in the main distribution folder (all .pie files are put into rules/ subfolder, the scripts into scripts/ subfolder).
- Fixed issue with the default/evaluation license;
- Fixed issue with the LoadRDF tool.
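External plugin directories of this kind are typically scanned at startup and their jars exposed through a dedicated class loader. A generic sketch of that mechanism (the plugins/ directory layout is the only detail taken from the notes above; GraphDB's actual loader is different):

```java
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

// Generic sketch of external-plugin discovery: collect every .jar in a
// plugins/ directory and put them all on one URLClassLoader, so plugins
// can be upgraded by replacing files rather than rebuilding the webapp.
// Illustrative only, not GraphDB's actual plugin loader.
public class PluginScanner {
    public static List<URL> scan(File pluginDir) {
        List<URL> jars = new ArrayList<>();
        File[] files = pluginDir.listFiles((dir, name) -> name.endsWith(".jar"));
        if (files != null) {
            for (File f : files) {
                try {
                    jars.add(f.toURI().toURL());
                } catch (MalformedURLException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        return jars;
    }

    public static ClassLoader loaderFor(File pluginDir) {
        return new URLClassLoader(scan(pluginDir).toArray(new URL[0]),
                PluginScanner.class.getClassLoader());
    }
}
```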
GraphDB 6.0 is a re-branded OWLIM 5.6 version. The differences relative to the last stable OWLIM 5.4 release are listed below.
- High Availability Cluster;
- Fast writes in SAFE mode (an OWLIM 5.5 improvement, which led to incompatible binary formats between 5.4 and 5.5+);
- LoadRDF tool for faster bulk loading of data, with speeds of ~100K statements/s and above, without inference;
- Explain Plan like functionality;
- LVM-based Backup and Replication.
- Databases created with one setting of the "entity-id-size" parameter (32- vs 40-bit) and opened with another setting would crash in versions prior to 6.0. Now an exception is thrown and the repository is not initialized.
- [LVM-based Backup and Replication] - Backup can optionally be based on the LVM Shadow Volume Copy - which makes it faster and the worker is released a few seconds after the backup is started (ported from 5.4).
- [New Cluster Test (cluster deployment and test tool)] - a tool for automated deployment and testing of clusters of various sizes. Can deploy on AWS and local instances. Supports docker format. Allows for acceptance, stress and load tests to be run on the deployed clusters. Optionally, creates Nagios configuration for the deployed cluster.
- LoadRDF tool - a tool for a faster bulk loading of data, which has been merged from 5.5 branch.
- Merged EntityPool Reverse Cache from 5.5 - speeds up large updates (100+ statements).
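An entity pool maps internal numeric IDs to RDF values; during updates the opposite lookup (value to ID) dominates, which is what a reverse cache accelerates. A minimal sketch of the idea (a hypothetical structure, not GraphDB's actual entity pool):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of an entity pool with a reverse cache. The forward direction
// (id -> value) is a plain list; the reverse cache (value -> id) makes
// the repeated lookups of a large update O(1) instead of a linear scan.
// Illustrative only, not GraphDB's internals.
public class EntityPool {
    private final List<String> idToValue = new ArrayList<>();
    private final Map<String, Integer> reverseCache = new HashMap<>();

    /** Returns the existing ID for the value, or assigns a new one. */
    public int idFor(String value) {
        Integer id = reverseCache.get(value);
        if (id != null) {
            return id;
        }
        idToValue.add(value);
        int newId = idToValue.size(); // IDs start at 1
        reverseCache.put(value, newId);
        return newId;
    }

    public String valueFor(int id) {
        return idToValue.get(id - 1);
    }
}
```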
- All AcceptanceTests that were previously failing are now fixed:
- Improved communication between master and worker nodes with respect to the acceptance tests;
- Worker thread: fixed out-of-sync handling upon initialisation;
- Improved logging; in particular, fixed the JVM omitting some stack traces;
- Initialisation of a 5.6 worker from a 5.4 image now skips "entityIdSize" and InferencerCRC from owlim.properties.
- cluster - empty worker initialisation;
- worker - initial update handling;
- log sync: 10s wait between idle rounds (network bandwidth optimisation);
- Tx log: initialisation bug fixed;
- update might fail when replication is in progress;
- miscellaneous bug fixes in the cluster utils (deployment/status) and proxies restart;
- detailed logging:
- replication cluster worker events;
- HTTP client stats;
- Tx log initialisation.
- AcceptanceTests failing: W4, M4, MW3, MW7, MW8;
- The new/experimental LVM backup/restore feature is not yet ported from 5.4 (thus the MW10 and MW11 Acceptance Tests, which are based on it, are not implemented).
- updated AcceptanceTests in the MastersAndWorkers section;
- replication start/wait methods improved;
- several fixes to the TxLog protocol;
- fixed replication logic to delete the Worker repo only when the remote worker confirms the replication;
- additional sanity checks added to the Master-to-Master and Master-to-Worker synchronisation;
- improved logging, incl. "SPLITBRAIN" events logged both to logs and to JMX.
- some MW* tests with the forced replication fail randomly, but rarely - related to the Proxy tool.
Ontotext redesigned its cluster architecture to support two or more separate data centers (each with its own Master and Worker nodes), and to provide asynchronous transactions and Master fail-over. OWLIM Enterprise already supported Master-Worker clusters with Automatic Replication, Load-balancing and Transaction Logs; this release improves those components. OWLIM 5.6 is based on 5.5 and inherits its write performance improvements.
- [OWLIM56:Client Fail-over Utility], which can be configured to fallback to the next master, if the first master becomes unavailable;
- Better TransactionLog support (see: [OWLIM56:Transaction Log Improvements]) - the updates are synchronised between all masters in all data centers;
- All Masters are now Read/Write;
- [OWLIM56:Smart Replication];
- Protocol backwards compatibility - the ability to upgrade the OWLIM cluster without downtime, following the OWLIM Upgrade Procedure;
- [External Plug-ins] - the plugins in OWLIM are moved into a separate plugin directory, and now can be upgraded/maintained separately.
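The fail-over behaviour of the client utility can be sketched generically: try each configured master in order and move to the next one when a call fails. A minimal illustration (hypothetical interfaces, not the actual utility's API):

```java
import java.util.List;
import java.util.function.Function;

// Generic sketch of client fail-over: attempt an operation against each
// master endpoint in order, falling back to the next on failure.
// Illustrative only; the real GraphDB client utility has its own API.
public class FailoverClient {
    public static <T> T execute(List<String> masters, Function<String, T> operation) {
        RuntimeException last = null;
        for (String master : masters) {
            try {
                return operation.apply(master); // first reachable master wins
            } catch (RuntimeException e) {
                last = e; // remember the failure, try the next master
            }
        }
        throw new IllegalStateException("All masters unavailable", last);
    }
}
```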
- Transaction consistency concern. In the new cluster, the Master responds to a client's update as soon as the first node completes it.
In a single-threaded scenario, the next query can be evaluated on a node that has either not yet received or not yet completed the update, which could lead to inconsistency from the client application's point of view. This deviates from the update processing in 5.4, where the response is sent only after the last of the available nodes completes the update [OWLIM-1483].
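The consistency concern can be illustrated with a toy two-node model: the master acknowledges the update once the first node has applied it, so a read routed to the second node before it catches up observes stale data. This is a hypothetical simulation of the timing window, not GraphDB code:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the consistency concern: the master acks after the first
// node applies an update; the second node applies it asynchronously, later.
// A read routed to the lagging node in between sees stale data.
public class StaleReadDemo {
    static final Map<String, String> node1 = new HashMap<>();
    static final Map<String, String> node2 = new HashMap<>();

    /** Master-style update: acknowledged as soon as the first node has it. */
    static void update(String key, String value) {
        node1.put(key, value);
        // ack is returned here; node2 would only be updated later
    }

    static String readFrom(Map<String, String> node, String key) {
        return node.getOrDefault(key, "stale");
    }
}
```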