
The following section deals with the memory consumption for GATE documents, annotations and features.


All GATE document memory usage figures below were obtained from synthetic tests on GATE 5.2.1 documents.

The numbers below are in megabytes unless specified otherwise. They were measured on 32-bit Java 6u20 on Windows 7.

Annotations per character / Document size | 10kB | 100kB | 1MB | 10MB
0.1                                       | 0.6  | 7     | 75  | 735
0.2                                       | 1    | 14    | 142 | >= 1200
0.5                                       | 3    | 33    | 347 | >= 1200
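
As a rough sanity check, the 1MB column can be turned into a per-megabyte expansion factor and scaled linearly; the 10MB column broadly supports linear scaling (75 MB at 1MB vs. 735 MB at 10MB for 0.1 annotations per character). The sketch below is a hypothetical estimator built from the table, not part of the GATE API:

```java
// Hypothetical back-of-envelope estimator derived from the table above.
// Assumes heap usage scales roughly linearly with text size; the factors
// are read straight off the 1MB column.
public class GateMemoryEstimate {

    // Approximate MB of heap per MB of document text, by annotation density.
    static double mbPerMbOfText(double annotationsPerChar) {
        if (annotationsPerChar <= 0.1) return 75;
        if (annotationsPerChar <= 0.2) return 142;
        return 347; // measured at 0.5 annotations per character
    }

    static double estimateMb(double textSizeMb, double annotationsPerChar) {
        return textSizeMb * mbPerMbOfText(annotationsPerChar);
    }

    public static void main(String[] args) {
        // 10 MB of text at 0.1 annotations per character:
        // linear scaling predicts 750 MB; the table reports 735 MB.
        System.out.printf("%.0f MB%n", estimateMb(10, 0.1));
    }
}
```

Real usage will of course vary with feature content, JVM version and heap settings; treat the output as an order-of-magnitude figure for capacity planning.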
  • Each document contains random words with an average length of 10 characters.
  • Each annotation spans approximately 4 characters and carries 4 features.
  • Annotations have distinct start and end points.
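
The random-text part of this setup can be sketched in plain Java, with no GATE dependency. The word-length range below is a hypothetical choice that averages 10 characters; annotation and feature creation is omitted:

```java
import java.util.Random;

// Sketch of the synthetic test input described above: random
// space-separated words averaging 10 characters in length.
public class SyntheticDoc {

    // Builds random words until targetLength characters are reached.
    static String randomWords(int targetLength, Random rnd) {
        StringBuilder sb = new StringBuilder(targetLength + 16);
        while (sb.length() < targetLength) {
            int wordLen = 8 + rnd.nextInt(5); // 8..12, averaging 10 characters
            for (int i = 0; i < wordLen; i++) {
                sb.append((char) ('a' + rnd.nextInt(26)));
            }
            sb.append(' ');
        }
        return sb.substring(0, targetLength);
    }

    public static void main(String[] args) {
        String text = randomWords(10 * 1024, new Random(42)); // a 10kB document
        System.out.println(text.length()); // prints 10240
    }
}
```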

"Annotations per character" in detail

The number of annotations per character in the table above may seem excessive at first glance. However, to avoid running out of memory we need to estimate the in-memory size of a GATE document at the peak of the semantic annotation process. At that peak:

  • there are Token annotations over each token, which is on average 3-4 characters long
  • there are Split annotations between sentences and Sentence annotations over the sentences themselves
  • there are Lookup annotations produced by the gazetteer
  • there are many temporary annotations representing intermediate results
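
These bullets can be turned into a quick density estimate. The per-component figures below are illustrative assumptions, not measurements: one Token per ~4.5 characters (a 3.5-character token plus a separator), one gazetteer Lookup per ~10 characters, and one Sentence/Split pair per ~100 characters:

```java
// Rough annotation-density arithmetic for the peak described above.
// All per-component densities are illustrative assumptions.
public class PeakDensity {

    static double peakDensity() {
        double tokens = 1.0 / (3.5 + 1.0); // ~0.22: one Token per token + separator
        double lookups = 1.0 / 10.0;       //  0.10: one Lookup per ~10 characters
        double sentences = 2.0 / 100.0;    //  0.02: one Sentence + one Split per ~100 chars
        return tokens + lookups + sentences;
    }

    public static void main(String[] args) {
        // prints 0.34 annotations per character, i.e. within the
        // 0.2-0.5 range covered by the table above
        System.out.printf("%.2f annotations per character%n", peakDensity());
    }
}
```

Even before counting temporary intermediate annotations, a standard tokeniser + gazetteer + splitter pipeline already lands in the middle of the table's range.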

This means that the in-memory size of a document in real situations depends on the design of the pipeline. To reduce memory consumption, we recommend the following:

  • Use a tokenizer only when necessary. Token annotations are the primary contributor to GATE document memory consumption.
  • Remove temporary annotations as soon as possible, rather than at the end of the pipeline; avoid keeping annotations in the document longer than needed.
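
The second recommendation can be illustrated with a simplified, GATE-free model; the `Ann` record and the plain set below are hypothetical stand-ins for GATE's `Annotation` and `AnnotationSet`, since only the timing of the removal matters here:

```java
import java.util.HashSet;
import java.util.Set;

// Simplified model of "clean temporary annotations early": the stage
// that consumes the temporary Lookup annotations deletes them at once,
// rather than leaving the cleanup to the end of the pipeline.
public class EarlyCleanup {

    record Ann(long start, long end, String type) {}

    static Set<Ann> runPipeline() {
        Set<Ann> annotations = new HashSet<>();

        // Tokeniser and gazetteer stages create Token annotations and
        // temporary Lookup annotations.
        annotations.add(new Ann(0, 4, "Token"));
        annotations.add(new Ann(0, 4, "Lookup"));
        annotations.add(new Ann(5, 9, "Lookup"));

        // The next stage consumes the Lookups to produce its result...
        annotations.add(new Ann(0, 9, "Person"));

        // ...and removes them immediately, so their memory can be
        // reclaimed while the rest of the pipeline runs.
        annotations.removeIf(a -> a.type().equals("Lookup"));

        return annotations;
    }

    public static void main(String[] args) {
        System.out.println(runPipeline().size()); // prints 2 (Token and Person)
    }
}
```

In actual GATE code, our reading of the Embedded API is that the equivalent step fetches the temporary set by type and deletes it, along the lines of `annSet.removeAll(annSet.get("Lookup"))`; check the API of your GATE version before relying on this.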