

h1. Getting started

h2. Installation, start-up and shutdown

h4. Deploying to an application server

The GraphDB Workbench can be deployed as a {{war}} file, much like any other web application in a Java application server. We recommend using Tomcat (7.x or 8.x). Copy the {{graphdb-workbench-x.y.z.war}} file to the servlet container's {{webapps}} directory. To access the GraphDB Workbench, open a browser at a URL similar to the following:
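The deployment step can be sketched as follows. The version number and Tomcat location are placeholders; substitute your actual {{graphdb-workbench-x.y.z.war}} file and Tomcat installation directory:

```shell
# Sketch: deploying the Workbench .war into a Tomcat webapps directory.
# The version number and Tomcat path below are placeholders; in a real
# deployment CATALINA_HOME would point at your Tomcat installation.
WAR="graphdb-workbench-6.4.0.war"    # hypothetical file name
CATALINA_HOME="$(mktemp -d)"         # stand-in for e.g. /opt/tomcat

mkdir -p "$CATALINA_HOME/webapps"
touch "$WAR"                         # stand-in for the real .war file

# The deployment itself is a single copy; Tomcat auto-deploys it on start-up.
cp "$WAR" "$CATALINA_HOME/webapps/"
ls "$CATALINA_HOME/webapps"
```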


Please refer to the documentation of your application server for more information on starting, stopping and deploying applications.

h4. Running as a standalone server

To run the GraphDB Workbench as a standalone server, unzip the distribution {{zip}} file and execute the {{{*}startup.sh{*}}} script (on Linux, macOS and other Unix-like systems) or the {{{*}startup.bat{*}}} script (on Windows).

You can also start the Workbench directly from the command line:

{noformat}
java -jar graphdb-tomcat.jar
{noformat}

These methods of deployment use an embedded Tomcat server, which deploys the {{war}} files in the {{sesame_graphdb}} directory. The GraphDB Workbench is accessed and administered through a web browser. If the server runs on the same machine, use a URL such as the following (assuming the default port number):


You can shut down the server with *ctrl-c* in the console.

h2. License

To run the GraphDB Workbench application, you need a valid GraphDB license. An evaluation license is included in the distribution by default; if your license has expired, contact Ontotext for a renewal. Once obtained, the recommended way of specifying the license is with a Java system property. Just add the following parameter:


to the Java process responsible for the Workbench deployment (for Tomcat, in its start-up configuration). For other methods, see [How to set up a GraphDB license|GraphDB FAQ#How do I set up license files for GraphDB-SE and GraphDB-Enterprise].

h2. Checking your setup

After opening the Workbench URL, a summary page is displayed showing the versions of the various GraphDB components and license details. If you see this page, it means you installed and configured the Workbench correctly.

h1. Using the Workbench

h2. Managing locations

To create or see any repositories, you need to attach a location. Locations represent individual GraphDB servers; they can be local (hosted inside the Workbench itself) or remote (any other GraphDB installation). Locations can be attached, edited and detached from the Locations and Repositories manager, accessible from the Admin menu. The steps to attach a location are:

# Go to Locations and Repositories in the Admin menu;
# Click the Attach location button;
# Enter a location:
#* For local locations, the absolute path to a directory on the machine running the Workbench;
#* For remote locations, the URL of the GraphDB web application.
#** (Optional) Specify credentials for the Sesame location (user and password);
#** (Optional) Add the JMX connection parameters (host, port and credentials), which allow you to monitor the resources of the remote location, monitor queries and manage a GraphDB cluster.

The JMX endpoint is configured by specifying a host and a port. The Workbench will construct a JMX URI of the kind {{service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi}} and the remote process has to be configured with compatible JMX settings. For example:
{noformat}
-Dcom.sun.management.jmxremote.port=<port> -Djava.rmi.server.hostname=<host>
{noformat}
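A minimal sketch of assembling the standard JVM flags for the remote process; the host and port values are examples, and disabling authentication and SSL as shown is only appropriate on a trusted network:

```shell
# Standard JVM flags for remote JMX over RMI; host and port are examples
# and must match the values entered in the Workbench location settings.
JMX_HOST="192.0.2.10"
JMX_PORT="8089"

JMX_OPTS="-Dcom.sun.management.jmxremote.port=${JMX_PORT}"
JMX_OPTS="${JMX_OPTS} -Dcom.sun.management.jmxremote.authenticate=false"
JMX_OPTS="${JMX_OPTS} -Dcom.sun.management.jmxremote.ssl=false"
JMX_OPTS="${JMX_OPTS} -Djava.rmi.server.hostname=${JMX_HOST}"

# These options would be appended to the Java command line of the remote
# GraphDB process, e.g. via JAVA_OPTS in its start-up script.
echo "$JMX_OPTS"
```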

You can attach multiple locations but only one can be active at a given time.


h2. Managing repositories

The repository management page is accessed by clicking *Admin/Locations and Repositories* from the menu. This displays a list of available repositories and their locations as well as the permissions that the user has for each repository.

h4. Creating a repository

To create a new repository, click *Create repository*. This will display the configuration page for the new repository where a new, unique ID has to be entered. The rest of the parameters are described in the [GraphDB-SE configuration section|GraphDB-SE Configuration#Sample Configuration] of the GraphDB documentation.


Alternatively, you can use a TTL file that specifies the repository type, ID and configuration parameters. Click on the triangle at the edge of the *Create repository* button and choose *File*.

h4. Editing a repository

The parameters you specify at repository creation time, such as cache memory, can be changed at any point. Click the *edit* icon next to a repository to edit it. Note that you have to restart the relevant GraphDB instance for the changes to take effect.

h4. Deleting a repository

Click the *bucket* icon to delete a repository. Once a repository is deleted, all data contained in it is irrevocably lost.

h4. Selecting a repository

You can connect to a repository using the dropdown menu in the top right corner. This lets you easily switch repositories while running queries, as well as while importing and exporting data in other views.


Another way to connect to a repository is from the Locations and Repositories view, by clicking the *slider* button next to the repository.


h2. Loading data into a repository

There are four ways to import data into the currently selected repository. They can be accessed from the menu by clicking *Data/Import*.

All import methods support asynchronous running of the import tasks, except for the text area import, which is intended for very fast and simple imports.

Currently, only one import task of a type is executed at a time, while the others wait in the queue as pending.

For local repositories, the parsing is done by the Workbench, so interruption and additional settings are supported.

When the location is remote, the data is simply sent to the remote endpoint, and the parsing and loading are performed there.

h3. Import settings

The settings for each import are saved so that they can be reused if you want to re-import a file. They are:

* Base URI - Specifies the base URI against which to resolve any relative URIs found in the uploaded data (see the Sesame documentation);
* Context - If specified, imports the data into the given context;
* Chunk size - The number of statements to commit in one chunk. If a chunk fails, the import operation is interrupted and the already imported statements are not rolled back. The default is no chunking, in which case all statements are loaded in a single transaction;
* Retry times - How many times to retry a commit if it fails;
* Preserve BNode IDs - When enabled, keeps the blank node IDs found in the file; otherwise, new internal blank node identifiers are assigned.

!import-settings.png|border=1, width=772,!

h3. Four ways to import data

h4. Upload files and import

The limitation of this method is the file size. The default limit is 200 MB, controlled by the {{{*}graphdb.workbench.maxUploadSize{*}}} property. The value is in bytes (e.g. {{\-Dgraphdb.workbench.maxUploadSize=20971520}} for 20 MB).
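For example, raising the limit to 500 MB for the standalone server could look like this (the 500 MB figure is just an example; the command is printed rather than executed here):

```shell
# Raise the Workbench upload limit to 500 MB. The property name comes from
# the documentation above; the property value must be in bytes.
MAX_UPLOAD=$((500 * 1024 * 1024))   # 524288000 bytes

CMD="java -Dgraphdb.workbench.maxUploadSize=${MAX_UPLOAD} -jar graphdb-tomcat.jar"
echo "$CMD"   # printed for illustration rather than executed
```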

Loading data from *Local files* streams the file directly to the Sesame statements endpoint:

# Click the icon to browse files for uploading;
# When the files appear in the table, either import a file by clicking *Import* on its line or select multiple files and click *Batch import*;
# The import settings dialog appears, in case you want to adjust any settings.



h4. Import server files

The server files import allows you to load files of arbitrary size. Its limitation is that the files must be placed in a specific directory (symbolic links are supported). By default, it is {{*$\{user.home\}/graphdb-import/*}}.

To change the directory location, set the {{{*}graphdb.workbench.importDirectory{*}}} system property. The directory is scanned recursively, and all files with a semantic MIME type are visible in the *Server files* tab.
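Preparing the default import directory can be sketched as follows; the data file path is hypothetical, and a dangling symlink is used here purely for illustration:

```shell
# Prepare the default server-files import directory and link a large file
# into it. IMPORT_DIR matches the documented default; DATA_FILE is a
# hypothetical path to a file you want to import.
IMPORT_DIR="${HOME}/graphdb-import"
DATA_FILE="/data/dumps/statements.ttl"

mkdir -p "$IMPORT_DIR/dumps"
# A symbolic link is enough -- symlinks are supported when the directory
# is scanned.
ln -sf "$DATA_FILE" "$IMPORT_DIR/dumps/statements.ttl"
ls -l "$IMPORT_DIR/dumps"
```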

h4. Import remote content

You can import from a URL serving RDF data. Any endpoint that returns RDF data can be used.

If the URL ends with a file extension, it is used to detect the correct data type.

Otherwise, you have to provide the *Data Format* parameter, which is sent as the {{Accept}} header to the endpoint and then passed on to the import loader.

You can also import the results of a SPARQL {{CONSTRUCT}} query. Again, you have to provide the *Data Format*.

h4. Paste and import

This is a very simple text import that sends the pasted data to the repository statements endpoint.
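The statements endpoint is part of the Sesame HTTP protocol; a minimal sketch of how its URL is formed (the base URL and repository ID are examples, and the request itself is shown only as a comment):

```shell
# Sesame HTTP protocol: data is POSTed to /repositories/<id>/statements.
# The base URL and repository ID below are examples.
BASE="http://localhost:8080/openrdf-sesame"
REPO="myrepo"
ENDPOINT="${BASE}/repositories/${REPO}/statements"

# An import request would look like (not executed in this sketch):
#   curl -X POST -H "Content-Type: text/turtle" \
#        --data-binary @data.ttl "$ENDPOINT"
echo "$ENDPOINT"
```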

h2. Executing queries

Access the SPARQL view from the menu by clicking *SPARQL*. The GraphDB Workbench SPARQL view integrates the YASGUI query editor and adds some extra features.

SPARQL {{SELECT}} and {{UPDATE}} queries are executed from the same view. The Workbench detects the query type and sends it to the correct Sesame endpoint. Some handy features are:

h5. A query area with syntax highlighting and namespace autocompletion

To add or remove namespaces, go to *Data/Namespaces*.

h5. Query tabs

Tabs are saved in your browser's local storage, so they are kept even when switching views.

h5. Saved queries

Click the *save* icon to save a query, or the *folder* icon to access existing saved queries. Saved queries are persisted on the server running the Workbench.

h5. Include or exclude inferred statements

A *>>*-like icon controls the inclusion of inferred statements. When both elements of the icon are the same dark shade, inferred statements are included; when only the left element is dark and the right one is greyed out, only the explicit statements are included.

h5. Keyboard shortcuts

Use *Ctrl/Cmd + Enter* to execute a query. You can find other useful shortcuts via the *keyboard shortcuts* link in the lower right corner of the SPARQL editor.

h5. Creating links to queries

Use the *link* icon to copy your query as a URL and share it. The link opens the query editor in a new tab and the query is filled there. For a longer query it might be more convenient to first save it and then get a link to the saved query by opening the saved queries list and clicking on the respective *link* icon.

h5. Downloading query results

The *Download As* button allows you to download query results in your preferred format (JSON, XML, CSV, TSV and Binary RDF for {{SELECT}} queries, and all RDF formats for graph query results).

h5. Various ways to view the results

Query results are shown in a table on the same page. You can order the results by column values and filter by table values.

The results can be viewed in different formats according to the type of the query and they can be used to create a Google Charts diagram.

The displayed query results are limited to 1,000 rows, since your browser cannot handle an unlimited number of results. To obtain all results, use *Download As* and select the required format.

You will see the total number of results and the query execution time in the query results header.

The total number of results is obtained by an asynchronous request with a special {{{*}default-graph-uri{*}}} parameter.

!sparql.png|border=1, width=781!

h2. Exporting data

Data can be exported in several ways and formats.

h4. Exporting entire repository or individual graphs

Click *Data/Export* from the menu and decide whether you need to export the whole repository (in several different formats) or specific named graphs (in the same variety of formats). Click the appropriate format and the download will begin:


h4. Exporting query results

The SPARQL query results can also be exported from the SPARQL view with results by clicking *Download As*.

h4. Exporting resources

From the resource description page, export the RDF triples that make up the resource description to JSON, JSON-LD, RDF-XML, N3/Turtle and N-Triples:


h2. Viewing and editing resources

h4. Viewing

To view a resource in the repository, go to *Data/View resource* and enter the URI of the resource, or navigate to it by clicking the links in the SPARQL results. Even when the resource does not exist, you are still taken to the resource view, where you can create triples for it using the resource editor. Viewing a resource provides an easy way to see all triples where the given URI is the subject, predicate or object.


h5. Create new resource





When ready, save the new resource to the repository.

h4. Editing

Once you open a resource in View resource you can also edit it. Click the edit icon next to the resource namespace and add, change or delete the properties of this resource.


You cannot change or delete inferred statements. For more information, see the [FAQ section|GraphDB FAQ#Why can't I delete some statements?] of the GraphDB documentation.


h2. Namespace management

You can view and manipulate the RDF namespaces for the active repository from the view accessible through *Data/Namespaces*. If you only have read access to the repository, you can view namespaces but not add or delete them.


h2. Cluster management

This release introduces the new cluster manager. Click *Admin/Cluster management* to open it. The cluster manager visualises the nodes (masters and workers) and the connections between them. Workers are represented by a bee icon, while masters are represented by a person with a hat. The connections between the nodes are shown as lines in different colours, according to the status of each worker. There are three basic operations that you can execute:

* Drag and drop nodes to connect them;
* Click a node to get additional information;
* Click a link to disconnect nodes.

The current limitations of the cluster manager are:

* A worker can be connected only to a single master;
* Workers from a local location cannot be connected to remote masters;
* Masters from a local location cannot be connected to remote masters;
* Master-to-master connections cannot be unidirectional.

The cluster manager will continuously update the visualisation so you can use it as a tool to monitor the status of workers.

If you experience any errors or issues, please check if your attached remote locations are accessible and the JMX settings for those locations are correct.


h4. Cloning workers

You can create all your workers through the Locations and Repositories manager but since in a cluster environment all workers are usually the same (in terms of configuration), you can also use the Clone functionality. Click a worker node to view its information and then click the *Clone to another location* button. A dialog opens where you enter the ID, the title and choose the target location.

h2. Connector management

The Connector manager lets you create, view and delete GraphDB Connector instances. It provides a handy form-based editor for Connector configurations. Click *Data/Connector management* to access it.

h3. Creating connectors

To create a new Connector configuration, click the *New Connector* button in the tab of the respective Connector type. Once you fill in the configuration form, you can either execute the create statement directly by clicking *OK*, or view it first by clicking *View SPARQL Query*. If you view the query, you can also copy it to execute manually or to integrate in automation scripts.

h3. Viewing connectors

Existing Connector instances will show under *Existing connectors* (below the New Connector button). Click the name of an instance to view its configuration and SPARQL query, or click the *repair* and *delete* icons to perform those operations.


h2. Users and access management

Users and access management is under *Admin/Users and Access* in the menu. The page displays a list of users and the number of repositories they have access to.

Security checks are disabled by default. You can enable or disable security for the entire GraphDB Workbench instance by clicking the slider above the user table. When security is disabled, everyone has full access to the repositories and the admin functionality.

From here, you can create new users, delete existing users or edit user properties, including setting their role and the read/write permission for each repository. The password can also be changed here.


User roles:

* User - can read and write according to their permissions for each repository;
* Admin - has full access, including creating, editing and deleting users.

Since GraphDB 6.4, repository permissions can be bound to a specific location only, or to all locations ("*" in the location list) to mimic the behaviour of pre-6.4 versions.


h4. Login and default credentials

If security is enabled, the first page you will see is the login page. The default administrator account information is:

username: *admin*
password: *root*

It is highly recommended that you change the root password as soon as you log in for the first time. Click your username (admin) in the top right corner to change it.

See section [Users and access management] in the current document.


h2. Workbench REST API

The Workbench in GraphDB 6.4 introduces the GraphDB Workbench REST API, a public subset of the API the Workbench itself uses. It can be used to automate various tasks without having to open the Workbench in a browser and perform them manually. The REST API calls fall into six major categories:

h5. Security management

Use the security management API to add, edit or remove users and thus integrate Workbench security into an existing system.

h5. Location management

Use the location management API to attach, activate, edit or detach locations.

h5. Repository management

Use the repository management API to add, edit or remove repositories in any attached location. Unlike the Sesame API, you can work with multiple remote locations from a single access point. When combined with location management, it can be used to automate the creation of multiple repositories across your network.

h5. Cluster management

Use the cluster management API to connect or disconnect workers and masters, connect masters to masters, and query the status of connected workers. You can also trigger a backup or restore on any master node. The advantage of using the cluster management API is not having to deal with JMX directly. When combined with location and repository management, it can be used to automate the setup of a GraphDB Enterprise cluster.

h5. Data import

Use the data import API to import data into GraphDB. You can choose between server files and a remote URL.

h5. Saved queries

Use the saved queries API to create, edit or remove saved queries. It is a convenient way to automate the creation of saved queries that are important to your project.

You can find more information about each REST API in *Admin/REST API Documentation*, as well as execute them directly from there and see the results.
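A hypothetical sketch of calling the REST API with {{curl}}; the endpoint path and base URL below are assumptions and should be verified against *Admin/REST API Documentation* (the command is printed rather than executed here):

```shell
# Hypothetical REST API call; verify the exact path under
# Admin/REST API Documentation before using it.
WORKBENCH="http://localhost:8080"   # example Workbench base URL
WB_USER="admin"                     # default credentials from the docs
WB_PASS="root"

# Listing repositories (path assumed, not confirmed by the docs):
LIST_CMD="curl -s -u ${WB_USER}:${WB_PASS} ${WORKBENCH}/rest/repositories"
echo "$LIST_CMD"   # printed for illustration rather than executed
```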

!restapi.png|border=1, width=781!

h1. Configuration properties

In addition to the standard GraphDB command line parameters, the GraphDB Workbench can be controlled with the following parameters, passed as Java system properties of the form {{\-Dparam=value}}.

|| Parameter || Deprecated name || Default || Description ||
| {{{*}graphdb.workbench.cors.enable{*}}} | app.cors.enable | false | Enables cross-origin resource sharing. |
| {{{*}graphdb.workbench.maxConnections{*}}} | app.maxConnections | 200 | Sets the maximum number of concurrent connections to a GraphDB instance. |
| {{{*}graphdb.workbench.datadir{*}}} | app.datadir | {{*$\{user.home\}/.graphdb-workbench/*}} | Sets the directory where the workbench persistence data will be stored. |
| {{{*}graphdb.workbench.importDirectory{*}}} | impex.dir | {{*$\{user.home\}/graphdb-import/*}} | Changes the location of the file import folder. |
| {{{*}graphdb.workbench.maxUploadSize{*}}} | app.maxUploadSize | 200 MB (209715200 bytes) | Maximum upload size for importing local files. The value must be specified in bytes. |
| {{{*}resource.language{*}}} | | 'en' (English) | Sets the default language in which to filter results displayed in the resource exploration. |
| {{{*}resource.limit{*}}} | | 100 | Sets the limit for the number of statements displayed in the resource view page. |