GraphDB User Guide

This documentation is NOT for the latest version of GraphDB.

Latest version - GraphDB 7.1

GraphDB Documentation

Next versions

GraphDB 6.6
GraphDB 7.0
GraphDB 7.1

Previous versions

GraphDB 6.4
GraphDB 6.3
GraphDB 6.2
GraphDB 6.0 & 6.1

[OWLIM 5.4]
[OWLIM 5.2]
[OWLIM 5.1]
[OWLIM 5.0]
[OWLIM 4.4]
[OWLIM 4.3]
[OWLIM 4.2]
[OWLIM 4.1]
[OWLIM 4.0]

Getting started

Installation, start-up and shutdown

Deploying to an application server

The GraphDB Workbench can be deployed as a war file, much like any other web application, into a Java application server. We recommend using Tomcat (7.x or 8.x). The graphdb-workbench-x.y.z.war file must be copied to the servlet container's webapps directory. To access the GraphDB Workbench, open a browser with a URL similar to the following:


Please refer to the documentation of your application server for more information on starting, stopping and deploying applications.

Running as a standalone server

To run the GraphDB Workbench as a standalone server, unzip the distribution zip file and execute the startup script: startup.sh on Linux, macOS and other Unix-like systems, or startup.bat on Windows.

You can also start the Workbench by running the line:

These methods of deployment use an embedded Tomcat server, which deploys the war files in the sesame_graphdb directory. The GraphDB Workbench is accessed and administered through a Web browser. If it is located on the same machine, use a URL such as the following (assuming the default port number):


You can shut down the server with ctrl-c in the console.


To run the GraphDB Workbench application, you need a valid GraphDB license. An evaluation license is included in the distribution by default, but if your license has expired, you need to contact Ontotext for a renewal. Once obtained, the recommended way of specifying it is with a Java system property. Just add the following parameter:


to the java process responsible for the Workbench deployment (Tomcat's file or the script). For other methods, see How to setup a GraphDB license.

Checking your setup

After opening the Workbench URL, a summary page is displayed showing the versions of the various GraphDB components and license details. If you see this page, it means you installed and configured the Workbench correctly.

Using the Workbench

Managing locations

In order to create or see any repositories, you need to attach a location. Locations represent individual GraphDB servers and they can be local (hosted in an internal server within the Workbench) or remote (any other GraphDB installation). Locations can be attached, edited and detached from the Locations and Repositories manager accessible in the Admin menu. The steps to attach a location are:

  1. Go to Locations and Repositories in the Admin menu;
  2. Click the Attach location button;
  3. Enter a location:
    • For local locations, the absolute path to a directory on the machine running the Workbench;
    • For remote locations, the URL to the GraphDB web application, e.g.
      • (Optionally) Specify credentials for the Sesame location (user and password);
      • (Optionally) Add the JMX connection parameters (host, port and credentials); this allows you to monitor the resources on the remote location, perform query monitoring and manage a GraphDB cluster.
The JMX endpoint is configured by specifying a host and a port. The Workbench will construct a JMX URI of the kind service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi and the remote process has to be configured with compatible JMX settings, for example: -Dcom.sun.management.jmxremote.port=<port> -Djava.rmi.server.hostname=<host>
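The URI construction described above can be sketched in a few lines; the helper function is illustrative only:

```python
def jmx_service_uri(host: str, port: int) -> str:
    """Build the JMX service URI the Workbench uses to reach a remote
    location, following the pattern described above."""
    return "service:jmx:rmi:///jndi/rmi://{0}:{1}/jmxrmi".format(host, port)

# Example (host and port are placeholders):
print(jmx_service_uri("graphdb.example.com", 8089))
# service:jmx:rmi:///jndi/rmi://graphdb.example.com:8089/jmxrmi
```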

You can attach multiple locations but only one can be active at a given time. The active location is always shown in the navigation bar next to a plug icon.

Note that if you use the Workbench as a SPARQL endpoint, all your queries are sent to a repository in the currently active location. This works well as long as no one changes the active location. To have endpoints that are always accessible outside the Workbench, we recommend using standalone Workbench and Engine installations, connecting the Workbench to the Engine over a remote location, and using the Engine endpoints (i.e. not those provided by the Workbench) in any software that executes SPARQL queries.

Managing repositories

The repository management page is accessed by clicking Admin/Locations and Repositories from the menu. This displays a list of available repositories and their locations as well as the permissions that the user has for each repository.

Creating a repository

To create a new repository, click Create repository. This will display the configuration page for the new repository where a new, unique ID has to be entered. The rest of the parameters are described in the GraphDB-SE configuration section of the GraphDB documentation.

Alternatively, you can use a TTL file that specifies the repository type, ID and configuration parameters. Click on the triangle at the edge of the Create repository button and choose File.

Editing a repository

The parameters you specify at repository creation time, such as cache memory, can be changed at any point. Click the edit icon next to a repository to edit it. Note that you have to restart the relevant GraphDB instance for the changes to take effect.

Deleting a repository

Click the bucket icon to delete a repository. Once a repository is deleted, all data contained in it is irrevocably lost.

Selecting a repository

You can connect to a repository by using the dropdown menu in the top right corner. This allows you to easily change the repository while running queries, as well as when importing and exporting data in other views.

Another way to connect to a repository in the Locations and Repositories view is by clicking the slider button next to a repository.

Loading data into a repository

There are four ways to import data in the currently selected repository. They can be accessed from the menu by clicking Data/Import.

All import methods support asynchronous running of the import tasks, except for the text area import, which is intended for very fast and simple imports.

Currently, only one import task of a type is executed at a time, while the others wait in the queue as pending.

For local repositories, the parsing is done by the Workbench, so interruption and additional settings are supported.

When the location is remote, the data is simply sent to the remote endpoint, where the parsing and loading are performed.

A file name filter is available to narrow down the list if you have many files to load.

Import settings

The settings for each import are saved so that you can use them, in case you want to re-import a file. They are:

  • Base URI - Specifies the base URI against which to resolve any relative URIs found in the uploaded data (see the Sesame System documentation);
  • Context - If specified, imports the data into the given context;
  • Chunk size - The number of statements to commit in one chunk. If a chunk fails, the import operation is interrupted and the already imported statements are not rolled back. The default is no chunking, in which case all statements are loaded in a single transaction;
  • Retry times - How many times to retry the commit if it fails;
  • Preserve BNode IDs - If enabled, uses the blank node IDs found in the file; otherwise, the repository assigns its own internal blank node identifiers.
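For local repositories, the Base URI and Context settings map onto the baseURI and context query parameters of the Sesame statements endpoint. The sketch below prepares such a request; the server URL and repository ID are placeholders, and note that URI values are passed in N-Triples form, i.e. wrapped in angle brackets:

```python
import urllib.parse
import urllib.request

def build_import_request(server, repo, data, base_uri=None, context=None):
    """Prepare a POST of RDF data to the Sesame statements endpoint,
    with optional baseURI/context query parameters."""
    params = {}
    if base_uri:
        params["baseURI"] = "<%s>" % base_uri   # N-Triples-encoded URI
    if context:
        params["context"] = "<%s>" % context
    url = "%s/repositories/%s/statements" % (server, repo)
    if params:
        url += "?" + urllib.parse.urlencode(params)
    return urllib.request.Request(url, data=data.encode("utf-8"),
                                  headers={"Content-Type": "text/turtle"},
                                  method="POST")

req = build_import_request("http://localhost:8080/graphdb-server", "myrepo",
                           "<urn:a> <urn:b> <urn:c> .",
                           context="http://example.org/graph")
# Send with: urllib.request.urlopen(req)
```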

Four ways to import data

Upload files and import

The limitation of this method is that it supports files of a limited size. The default limit is 200 MB, controlled by the graphdb.workbench.maxUploadSize property. The value is in bytes (e.g. -Dgraphdb.workbench.maxUploadSize=20971520).
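Since the property value is in bytes, the example value above corresponds to a 20 MB limit:

```python
# maxUploadSize is specified in bytes; 20 MB works out to:
print(20 * 1024 * 1024)  # 20971520, i.e. -Dgraphdb.workbench.maxUploadSize=20971520
```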

Loading data from local files streams the file directly to the Sesame statements endpoint:

  1. Click the icon to browse files for uploading;
  2. When the files appear in the table, either import a file by clicking Import on its line or select multiple files and click Batch import;
  3. The import settings dialog appears, in case you want to specify additional settings.

Import server files

The server files import allows you to load files of arbitrary sizes. Its limitation is that the files must be put (symbolic links are supported) in a specific directory. By default, it is ${user.home}/graphdb-import/.

To change the directory location, set the graphdb.workbench.importDirectory system property. The directory is scanned recursively and all files with a semantic MIME type are visible in the Server files tab.

Import remote content

You can import from a URL with RDF data. Any endpoint that returns RDF data can be used.

If the URL has a file extension, it is used to detect the correct data format.

Otherwise, you have to provide the Data Format parameter, which is sent as an Accept header to the endpoint and then passed to the import loader.

You can also import data from a SPARQL construct query. Again, you have to provide the Data Format.
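Extension-based format detection can be pictured with a small sketch; the mapping below is an illustrative subset of common RDF MIME types, not GraphDB's exact table:

```python
import os
from urllib.parse import urlparse

# Illustrative subset of RDF file extensions and their MIME types.
RDF_FORMATS = {
    ".ttl": "text/turtle",
    ".nt": "application/n-triples",
    ".rdf": "application/rdf+xml",
    ".jsonld": "application/ld+json",
    ".trig": "application/trig",
}

def detect_format(url):
    """Return the MIME type for the URL's file extension, or None if it
    cannot be detected (the Data Format parameter is then required)."""
    ext = os.path.splitext(urlparse(url).path)[1].lower()
    return RDF_FORMATS.get(ext)

print(detect_format("http://example.org/data.ttl"))  # text/turtle
print(detect_format("http://example.org/sparql"))    # None -> Data Format needed
```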

Paste and import

This is a very simple text import that sends the data to the Repository Statements Endpoint.

Executing queries

Access the SPARQL pages from the menu by clicking SPARQL. The GraphDB Workbench SPARQL view integrates the YASGUI query editor and has some additional features.

SPARQL SELECT and UPDATE queries are executed from the same view. The Workbench detects the query type and sends it to the correct Sesame endpoint. Some handy features are:

  • A query area with syntax highlighting and namespace autocompletion (to add or remove namespaces, go to Data/Namespaces);
  • Query tabs - saved in your browser's local storage, so you can keep them even when switching views;
Saved queries

Click the save icon to save a query, or the folder icon to access existing saved queries. Saved queries are persisted on the server running the Workbench.

Include or exclude inferred statements

A >>-like icon controls the inclusion of inferred statements. When both elements of the icon are the same dark shade, inferred statements are included. When only the left element is dark and the right one is greyed out, only the explicit statements are included.


The SPARQL view can show only a limited number of results at once; use pagination to navigate through all of them. For SELECT queries, each page executes the query again with the appropriate limit and offset. For graph queries (CONSTRUCT and DESCRIBE), all results are fetched by the server and only the page of interest is gathered from the results iterator and sent to the client.
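For SELECT queries, the per-page re-execution can be sketched as follows; the exact way the Workbench rewrites the query may differ, and the base URL and repository ID are placeholders:

```python
from urllib.parse import urlencode

def paged_query_url(server, repo, query, page, page_size=1000):
    """Append LIMIT/OFFSET for the requested page and build the Sesame
    repository query URL."""
    paged = "%s LIMIT %d OFFSET %d" % (query, page_size, (page - 1) * page_size)
    return "%s/repositories/%s?%s" % (server, repo, urlencode({"query": paged}))

url = paged_query_url("http://localhost:8080/graphdb-server", "myrepo",
                      "SELECT * WHERE { ?s ?p ?o }", page=2)
# Fetch with an Accept header such as application/sparql-results+json.
```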

Keyboard shortcuts

Use Ctrl/Cmd + Enter to execute a query. You can find other useful shortcuts via the keyboard shortcuts link in the lower right corner of the SPARQL editor.

Creating links to queries

Use the link icon to copy your query as a URL and share it. The link opens the query editor in a new tab and the query is filled there. For a longer query it might be more convenient to first save it and then get a link to the saved query by opening the saved queries list and clicking on the respective link icon.

Downloading query results

The Download As button allows you to download query results in your preferred format (JSON, XML, CSV, TSV and Binary RDF for Select queries and all RDF formats for Graph query results).

Various ways to view the results

Query results are shown in a table on the same page. You can order the results by column values and filter by table values.

The results can be viewed in different formats according to the type of the query and they can be used to create a Google Charts diagram.

The query results shown in the browser are limited to 1000, since your browser cannot handle an unlimited number of results. To obtain all results, use Download As and select the required format for the data.

You will see the total number of results and the query execution time in the query results header.

The total number of results is obtained by an async request with a default-graph-uri parameter and the value

SPARQL editor options

Since GraphDB 6.5, the Workbench introduces support for additional viewing/editing modes in the SPARQL editor.

Horizontal and vertical mode
Use the vertical mode switch to show the editor and the results next to each other, which is particularly useful on wide screens. Click the switch again to return to horizontal mode.

Viewing results or editor only
Both in horizontal and vertical mode you can also hide the editor or the results to focus on query editing or result viewing. Click on the buttons Editor only, Editor and results or Results only to switch between the different modes.

Exporting data

Data can be exported in several ways and formats.

Exporting entire repository or individual graphs

Click Data/Export from the menu and decide whether you need to export the whole repository (in several different formats) or specific named graphs (in the same variety of formats). Click the appropriate format and the download will begin:

Exporting query results

The SPARQL query results can also be exported from the SPARQL view with results by clicking Download As.

Exporting resources

From the resource description page, export the RDF triples that make up the resource description to JSON, JSON-LD, RDF-XML, N3/Turtle and N-Triples:

Viewing and editing resources


To view a resource in the repository, go to Data/View resource and enter the URI of the resource, or navigate to it by clicking links in the SPARQL results. Even when the resource is missing, you will still be taken to the resource view, where you can create triples for it using the resource editor. Viewing resources provides an easy way to see triples where a given URI is the subject, predicate or object.

Create new resource

When ready, save the new resource to the repository.


Once you open a resource in View resource, you can also edit it. Click the edit icon next to the resource namespace and add, change or delete the properties of this resource.

You cannot change or delete inferred statements. For more information, see the FAQ section of the GraphDB documentation.

Namespace management

You can view and manipulate the RDF namespaces for the active repository from the view accessible through Data/Namespaces. If you only have read access to the repository you cannot add or delete namespaces but only view them.

Context view

A list of the contexts (graphs) in a repository can be seen in the Contexts view available through Data/Contexts. You can use it for the following tasks:

  • a reference of available contexts in a repository (use the filter to narrow down the list if you have many contexts);
  • inspecting triples in a context by clicking on it;
  • dropping a context by clicking on the bucket icon.

Cluster management

This release introduces the new cluster manager. Click Admin/Cluster management to open it. The cluster manager visually shows the nodes (masters and workers) and the connections between them. Workers are represented by a bee icon, while masters are represented by a person with a hat. The connections between the nodes are shown as lines in different colours according to the status of each worker. There are three basic operations that you can execute:

  • Drag and drop nodes to connect them;
  • Click a node to get additional information;
    • For workers:
  • Click a link to disconnect nodes.

The current limitations of the cluster manager are:

  • A worker can be connected only to a single master;
  • Workers from a local location cannot be connected to remote masters;
  • Masters from a local location cannot be connected to remote masters;
  • Master-to-master connections cannot be unidirectional.

The cluster manager will continuously update the visualisation so you can use it as a tool to monitor the status of workers.

If you experience any errors or issues, check that your attached remote locations are accessible and that the JMX settings for those locations are correct.

Cloning workers

You can create all your workers through the Locations and Repositories manager but since in a cluster environment all workers are usually the same (in terms of configuration), you can also use the Clone functionality. Click a worker node to view its information and then click the Clone to another location button. A dialog opens where you enter the ID, the title and choose the target location.

Connector management

The Connector manager lets you create, view and delete GraphDB Connector instances. It provides a handy form-based editor for Connector configurations. Click Data/Connector management to access it.

Creating connectors

To create a new Connector configuration, click the New Connector button in the tab of the respective Connector type. Once you fill in the configuration form, you can either execute the create statement directly by clicking OK, or view it by clicking View SPARQL Query. If you view the query, you can copy it to execute manually or integrate it into automation scripts.

Viewing connectors

Existing Connector instances will show under Existing connectors (below the New Connector button). Click the name of an instance to view its configuration and SPARQL query, or click the repair and delete icons to perform those operations.

Users and access management

User and access checks are disabled by default. If you want to enable them, go to Admin/Users and Access and click the Security slider above the user table.

Users and access management is under Admin/Users and Access in the menu. The page displays a list of users and the number of repositories they have access to. It is also possible to disable security for the entire GraphDB Workbench instance by clicking Disable/Enable. When security is disabled, everyone has full access to repositories and admin functionality.

From here, you can create new users, delete existing users or edit user properties, including setting their role and the read/write permission for each repository. The password can also be changed here.

User roles:

  • User - a user who can read and write according to their permissions for each repository;
  • Admin - a user with full access, including creating, editing and deleting users.

Since GraphDB 6.4, repository permissions can be bound to a specific location only, or to all locations ("*" in the location list) to mimic the behaviour of pre-6.4 versions.

Login and default credentials

If security is enabled, the first page you will see is the login page. The default administrator account information is:

username: admin
password: root

It is highly recommended that you change the default password as soon as you log in for the first time. Click your username (admin) in the top right corner to change it.

See section Users and access management in the current document.

Free access

Free access is a new feature since GraphDB 6.5. It allows people to access a predefined set of functionality without having to log in. This is especially useful for providing read-only access to a repository.

You can enable free access by going to Admin/Users and Access and clicking on the Free Access slider above the user table. When you enable free access, a dialog box will open and prompt you to select the access rights for free access users. The available permissions are similar to those for authenticated users, e.g. you can provide read or read/write access to one or more repositories.

Note that you must have security enabled to use free access and the setting will not show if security is disabled.


The Workbench in GraphDB 6.4 introduces the GraphDB Workbench REST API, a public subset of the API that the Workbench itself uses. It can be used to automate various tasks without having to open the Workbench in a browser and perform them manually. The REST API calls fall into six major categories:

Security management

Use the security management API to add, edit or remove users and thus integrate Workbench security into an existing system.

Location management

Use the location management API to attach, activate, edit or detach locations.

Repository management

Use the repository management API to add, edit or remove repositories in any attached location. Unlike the Sesame API, you can work with multiple remote locations from a single access point. When combined with the location management API, it can be used to automate the creation of multiple repositories across your network.

Cluster management

Use the cluster management API to connect or disconnect workers and masters, connect masters to masters, and query the status of connected workers. You can also trigger a backup or restore on any master node. The advantage of using the cluster management API is that you do not have to deal with JMX. When combined with the location and repository management APIs, it can be used to automate the setup of a GraphDB Enterprise cluster.

Data import

Use the data import API to import data into GraphDB. You can choose between server files and a remote URL.

Saved queries

Use the saved queries API to create, edit or remove saved queries. It is a convenient way to automate the creation of saved queries that are important to your project.

You can find more information about each REST API in Admin/REST API Documentation, as well as execute them directly from there and see the results.
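As an example of automating a task with plain HTTP, the sketch below prepares a request for listing repositories. The /rest/repositories path and base URL are assumptions; verify the exact paths in Admin/REST API Documentation.

```python
import urllib.request

WORKBENCH = "http://localhost:8080"  # placeholder Workbench base URL

def list_repositories_request(base=WORKBENCH):
    """Prepare a GET to the Workbench REST API repository listing.
    The /rest/repositories path is an assumption; check the REST API
    documentation in the Workbench for the authoritative paths."""
    return urllib.request.Request(base + "/rest/repositories",
                                  headers={"Accept": "application/json"})

req = list_repositories_request()
# Execute with: urllib.request.urlopen(req)
# (add authentication if security is enabled)
```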

Configuration properties

In addition to the standard GraphDB command line parameters, the GraphDB Workbench can be controlled with the following parameters, passed in the form -Dparam=value.

  • graphdb.workbench.cors.enable (deprecated name: app.cors.enable; default: false) - Enables cross-origin resource sharing;
  • graphdb.workbench.maxConnections (deprecated name: app.maxConnections; default: 200) - Sets the maximum number of concurrent connections to a GraphDB instance;
  • graphdb.workbench.datadir (deprecated name: app.datadir; default: ${user.home}/.graphdb-workbench/) - Sets the directory where the Workbench persistence data is stored;
  • graphdb.workbench.importDirectory (deprecated name: impex.dir; default: ${user.home}/graphdb-import/) - Changes the location of the file import folder;
  • graphdb.workbench.maxUploadSize (deprecated name: app.maxUploadSize; default: 200 MB) - Maximum upload size for importing local files; the value must be in bytes;
  • resource.language (default: 'en', English) - Sets the default language in which to filter results displayed in resource exploration;
  • resource.limit (default: 100) - Sets the limit for the number of statements displayed in the resource view page.