ResearchSpace Image Annotation – Additional Detail for Functional Requirement

Author: Dominic Oldman

Draft 0.3 for Discussion

Date: 21st Feb 2012

 

Note:

0.3 includes developer checklist and references to client business requirements.

1                   Introduction

Image Annotation is described in Annex 1 of the ResearchSpace Business Requirements & Specification v2 document. Annex 1 describes the tool at a high level and identifies the following components:

·        A main area in which images are displayed and can be annotated

·        The annotation text itself

·        The ability to sort annotations

·        The ability to filter annotations

·        The ability to zoom in and out of images (particularly high-resolution images)

·        Tools for selecting regions and points for annotation.

·        Information (metadata) about the image.

Related functionality is image overlay, which is described at 12.9 of the same document. Image overlay allows images to be scaled so that they can be placed on top of each other, giving a precise overlay of different types of image of the same subject, or of other related images: for example, an x-ray of an object placed on top of a standard image of the same object. This allows users to relate surface detail to underlying detail, which may itself be the subject of scholarly annotation.

2                   User Interface

Functionality will be complemented by a user interface produced by the project designers; the implementation should follow that design and remain consistent with the other designed interfaces.

What deep zoom system? – example matrix

Criterion                             | IIP Image Server | MS Seadragon | Notes
--------------------------------------+------------------+--------------+------
Ability to layer annotations          |                  |              |
Performance                           |                  |              |
Portability / Lack of Dependencies*   |                  |              |
Technical Support                     |                  |              |
Ease of development                   |                  |              |
Open Source*                          |                  |              |
Image Preparation                     |                  |              |
Other benefits                        |                  |              |

*Relates to business rules
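
For orientation only, the sketch below illustrates how the two candidates typically expose deep zoom imagery; the file names, dimensions and server paths are hypothetical, and the exact parameters should be verified against the versions actually evaluated. Seadragon reads a Deep Zoom (DZI) descriptor produced when the image is pre-tiled, whereas IIP Image Server serves regions of a pyramidal TIFF generated on request:

    <!-- Deep Zoom descriptor (DZI) consumed by Seadragon; created during image preparation -->
    <Image TileSize="254" Overlap="1" Format="jpg"
           xmlns="http://schemas.microsoft.com/deepzoom/2008">
      <Size Width="6000" Height="4000"/>
    </Image>

    # IIP Image Server request for a region of a pyramidal TIFF, resized and converted on the fly
    # (FIF = source image, RGN = relative region x,y,w,h, WID = output width, CVT = output format)
    http://example.org/fcgi-bin/iipsrv.fcgi?FIF=/images/object.tif&RGN=0.25,0.25,0.5,0.5&WID=800&CVT=jpeg

This difference is what the Image Preparation and Portability rows above are intended to capture: the DZI route requires a tile pyramid to be generated in advance, while the IIP route serves tiles dynamically from a single pyramidal TIFF but depends on a server-side FCGI component.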

 


3                   Adherence to Client Business Requirements

( https://docs.google.com/a/researchspace.org/viewer?a=v&pid=sites&srcid=cmVzZWFyY2hzcGFjZS5vcmd8cmVzZWFyY2hzcGFjZXxneDo3OTJiM2U1OGQxNmRhNDIz )

Please refer to ResearchSpace Business Requirements & Specification, Date: May 2010, Version: 2. These elements are part of client acceptance testing.

Relevant business rules:

Business Rule 3: Open System Standards – The ResearchSpace tools should not be dependent on proprietary APIs. It should be possible to take any tool developed for use with open standard RDF data and incorporate that tool into ResearchSpace without substantial redevelopment. Similarly, tools developed specifically for ResearchSpace should be usable with little or no modification outside the ResearchSpace environment. This business rule ensures the open nature of ResearchSpace and keeps the RDF research tool model simple and accessible.

Business Rule 4: Image Restrictions – Images will require some security so that specified images can be hidden from some users but be available to others. This reflects the practical reality that some images are more likely to be subject to conditions and restrictions. It is anticipated that standard Digital Asset Management tools can be used to control image access without offending other ResearchSpace business rules. ResearchSpace will encourage sharing of digital media whenever possible.

Business Rule 6: Use Cases – The main ResearchSpace elements of data, collaboration, analysis and web publication should be integrated but also available as separate functions, to encourage a wide range of project and related uses.

Business Rule 7: Open Source – New software created will be released to the community freely as open source. Existing open source tools should, if possible, be utilised for ResearchSpace development.

4                   Development Checklist

 

Availability – Is the tool available for the public web site? Research tools should also be offered as web site tools or plug-ins for general internet users of the site, but in a form that can be locked down through configuration, if required, to restrict access (for example, read rather than write access). (See para 11.2.5.)

Are the high-level requirements for Image Annotation fulfilled? (See para 12.6.)

Does the system comply with the general architecture principles?

“As an open system, closed protocols, proprietary APIs and formats are to be avoided. Where not specifically stated in this document, the following rules should be applied. Any exceptions to the following must have a clear business case, such as that an existing component would require too much resource to comply. The ResearchSpace environment will primarily be accessed through a web browser interface; however, specialised research tools which are not web-based may additionally be adapted to access data from and contribute to ResearchSpace as required.

·        All data, components and services must be capable of being invoked over HTTP.

·        The primary mode of access is by URI resolution to a simple data URL (document) or Linked Data URI (semantic data).

·        Services are invoked wherever possible in a REST-compliant manner; if full REST compliance is not possible, then REST-like URIs must be used.

·        Complex queries in this REST-like style using standard SPARQL will be supported.

·        All metadata shall be represented in an accepted format of RDF, such as RDF/XML, Turtle or N3, in order of preference.

·        For data objects, such as documents and images, open formats such as html, jpeg and mp3 are preferred, although it is accepted that proprietary formats such as PDF and Microsoft Word may sometimes be necessary.

·        It may be permissible to invoke a service on the same server directly, for example by using direct invocation via functions, but only if there exists an equivalent means via HTTP; the direct route being used solely for performance or similar reasons.”

See para 13.2
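
As an illustration of these principles (the host name, endpoint path and resource URIs below are hypothetical and not part of the requirements), an annotation published in this style could be fetched by resolving its Linked Data URI, and queried over HTTP with standard SPARQL:

    # Resolve a Linked Data URI to semantic data (Turtle requested by content negotiation)
    GET http://data.researchspace.org/annotation/1234
    Accept: text/turtle

    # A complex query in the same REST-like style, expressed in standard SPARQL and sent
    # (URL-encoded) to an HTTP query endpoint: find all annotations on a given image
    PREFIX oa: <http://www.w3.org/ns/oa#>
    SELECT ?annotation ?body
    WHERE {
      ?annotation a oa:Annotation ;
                  oa:hasBody   ?body ;
                  oa:hasTarget ?target .
      ?target oa:hasSource <http://data.researchspace.org/image/567> .
    }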

 

Standard Annotation Functionality

Function / Stage 3 status / Comment – status and comments are noted against individual items below where they have been recorded.

1. There should be synergy between the features available in data annotation. The look, feel and features should be consistent.

2. Creating a point or region on an image as an overlay that can be used as the subject of an annotation.

3. A point should be indicated using a dot or a choice of marker, as appropriate.

4. A region should be a line, square, rectangle, circle or a shape drawn using a freehand tool. Different colours should be available for regions and points to ensure they can be seen against different images.

5. The freehand tool should be able to trace from point to point and connect the first and last points (polygon).

6. Annotations should record the person who made the annotation and the date and time. This should conform to attribution requirements (linked to the registry).

7. The tool should provide a box with the annotation point or region that allows the entry of text in context. Text should also be editable in the main “discussion/annotation” panel.

8. There should be a separate panel providing the following functionality:

9. The annotations should be available as a panel (discussion-forum-type functionality) in the same way as data annotation.

10. The annotations displayed in the panel should be linked to the annotation point or region in the image and vice versa. Clicking on an annotation text will direct focus to the annotation region on the image; clicking on an annotation point or region should give focus to the annotation text in the discussion panel.

11. It should be possible to create an annotation that is linked to a piece of text within another annotation, i.e. create a point or region on an image, annotate it and link it to a substring in another annotation. See http://ada.drew.edu/dmproject/

12. Create links between the different text annotations against an image, i.e. highlight a substring in an annotation and link this to a substring in another text annotation.

13. It should be possible to annotate an annotation as part of a thread of annotations about a particular point or region.

14. Should be able to perform text searches through the annotations.

15. Should be able to search on author, date, project etc. (see the same options for data annotation).

16. The ability to filter on author, project, date and type (and the same categories as those used for data annotation).

17. Filters should also be reflected in the markers and regions on the image, so that filtering the text annotations against the image is reflected in what is shown on the image itself.

18. Should be able to toggle the text annotation on the image itself so that the text can be seen against the marker or region. This should work on an individual basis for each annotation or be applied across all annotations.

19. The ability to link the annotation to a field in a record stored on ResearchSpace. (This would require the ability to find the record through a search interface.)

20. The ability to link the annotation to a record or document (including another image) stored on ResearchSpace. (This would require the ability to find the record through a search interface.) A link should be available for others to follow. Thumbnails should be available in the annotation as clickable links.

21. The ability to link the annotation to an internal or external resource, including another image or a document.

22. Ability to put embedded thumbnail images into the annotation text itself (each a clickable link).

23. It should be possible to take annotations from the data annotation tool. (Specific function or data basket?)

24. Ability to support the annotation functionality with different layers that can be turned on and off, e.g. a project-specific layer or other categories.

25. The screen should provide the metadata information about the image.

26. Annotation should form part of the RDF and use Open Annotation standards (see the sketch after this list).

27. It should be possible to embed other formats, such as images and sound, in the annotation.

28. It should be possible to incorporate structured terminology from ResearchSpace terminology resources into the annotation, which is then searchable as structured data. Example: http://dme.ait.ac.at/annotation/

29. There should be a full screen mode.

30. It should be possible to create your own markers or symbol images to be used for marking an annotation.

31. It should be possible to include clickable links in the annotation.

32. It should be possible, as a different option, to place an external clickable link on a marker or region.

33. It should be possible to promote an annotation into a discussion or other collaboration post (e.g. wiki).

34. Linking annotations to fields or records in the system should mean that annotations can take part in the why, what, who and where search scenario. A linked annotation should have the correct terminology or predicates to provide this semantic link.

35. Should be able to differentiate annotations using different region colours and markers.

36. It should be possible to superimpose a grid on the image to aid placement of annotation markers.

In this respect the annotation tool works in a similar way to the data annotation tool.
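
As a sketch only (the URIs, the SVG region, the attribution properties and the thesaurus concept below are illustrative assumptions rather than a prescribed model), a single region annotation covering items 2, 6, 7, 26 and 28 above might be expressed in Turtle using the Open Annotation vocabulary; linking to a substring of another annotation (items 11 and 12) would use an analogous text selector on a second target:

    @prefix oa:   <http://www.w3.org/ns/oa#> .
    @prefix dct:  <http://purl.org/dc/terms/> .
    @prefix cnt:  <http://www.w3.org/2011/content#> .
    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
    @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

    # The annotation itself: who made it, when, its body and its target (items 6, 26)
    <http://data.researchspace.org/annotation/1234>
        a oa:Annotation ;
        dct:creator <http://data.researchspace.org/person/abc> ;
        dct:created "2012-02-21T10:30:00Z"^^xsd:dateTime ;
        oa:hasBody   <http://data.researchspace.org/annotation/1234/body> ;
        oa:hasTarget <http://data.researchspace.org/annotation/1234/target> .

    # The body: free text plus a structured terminology concept, searchable as data (items 7, 28)
    <http://data.researchspace.org/annotation/1234/body>
        a cnt:ContentAsText ;
        cnt:chars "Possible retouching along the lower edge of the drapery." ;
        dct:subject <http://data.researchspace.org/thesaurus/retouching> .

    <http://data.researchspace.org/thesaurus/retouching>
        a skos:Concept ;
        skos:prefLabel "retouching" .

    # The target: the image constrained to a drawn region, here an SVG polygon (items 2, 4, 5)
    <http://data.researchspace.org/annotation/1234/target>
        a oa:SpecificResource ;
        oa:hasSource <http://data.researchspace.org/image/567> ;
        oa:hasSelector [
            a oa:SvgSelector ;
            rdf:value "<svg xmlns='http://www.w3.org/2000/svg'><polygon points='120,80 300,80 300,220 120,220'/></svg>"
        ] .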

Zooming Annotation

37. The functionality above should be available at different zoom levels.

38. An image will have different degrees of zoom depending upon the resolution of the file. However, in order to support annotation at different levels, these levels should use a consistent mechanism, i.e. level 1, level 2, etc. (one possible convention is sketched after this list). Comment: the number of levels may differ depending upon the resolution of the file.

39. The user should be able to specify a view option between showing all annotations (subject to the filter and the viewable area) or only the annotations that have been made at the current zoom level. Options: annotations at all levels / annotations at zoom level.

40. The user should be able to create annotations (described above) at specific levels of zoom.

41. Clicking on an annotation in the panel will cause the screen to focus on that annotation on the image at the same zoom level at which the annotation was made.

42. The user should be able to use the filtering system so that only the annotations specified by the filter are viewable. (Note that this is subject to the annotation view option described at item 39 above.)

43. It should be possible to include arrow heads on lines drawn by the line tool.

44. An annotation could be an image or sound file, or even a video.
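
One possible convention for item 38 (an assumption, not something mandated by the requirements) is to reuse the tile pyramid's own level numbering, in which each level halves the dimensions of the one above, so the number of levels follows directly from the image resolution and an annotation can simply store the level at which it was made:

    levels = ceil(log2(max(width, height))) + 1

    Example, for a hypothetical 6000 x 4000 pixel image:
        ceil(log2(6000)) + 1 = 13 + 1 = 14 levels (level 0 = 1 pixel, level 13 = full resolution)

    Example, for a hypothetical 1500 x 1000 pixel image:
        ceil(log2(1500)) + 1 = 11 + 1 = 12 levels

    An annotation records the level at which it was created; the "annotations at zoom level"
    view option (item 39) then reduces to comparing that stored level with the current one.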

 

 

 

 

Scaling and Overlays of Images (less developed at this point)

45. The system should be able to take two images and scale one or both of them in order to provide a useful overlay of both images (a sketch of the scaling follows this list).

46. Controls should be provided to adjust opacity and other relevant image properties and colour balances.

47. The process should be done through a layering system.

48. The system should provide tools to enable the images to be scaled and matched precisely.

49. Annotation functionality (including zooming annotation) should be applicable to these images.
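
As an illustrative sketch for items 45 and 48 (the numbers are invented, and the images are assumed to be already aligned in orientation), precise matching can be reduced to the user marking the same two reference points on each image; the scale factor and offset for the overlay layer then follow directly:

    scale  = distance(A1, A2) / distance(B1, B2)    # A1, A2 on the base image; B1, B2 on the overlay
    offset = A1 - scale * B1                        # translation that maps B1 onto A1 after scaling

    Example: the two reference points are 400 pixels apart on the base image
             and 500 pixels apart on the x-ray to be overlaid
        scale = 400 / 500 = 0.8

    The x-ray layer is drawn at 80% of its original size, positioned so that the first reference
    points coincide, and the opacity control (item 46) is then used to compare surface detail
    with the underlying detail.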

 

 

 

 

 

 


Annex 1

Example 1 – Zooming Annotation - The Arch of Constantine

http://theeyegame.com/DeepZoom/ArchOfConstantine#link22

 


Example 2 – Speaking Images

http://www.speakingimage.org

Layers / Overlay example. Also example of point and region tools.

 

 


Example 3 – Marking and Linking, turning off and on, annotation of an annotation

http://ada.drew.edu/dmproject/

Part funded by the Andrew W. Mellon Foundation.

See tutorial video


Example 4 – Structured data and zoom

http://dme.ait.ac.at/annotation/