{toc}

h1. AnnoCultor
[AnnoCultor|http://annocultor.eu/] started out as "Just Code it in Java", Mitac's favored approach :-) It is a Java application used by VU for the eCulture Pilot. [Papers|http://annocultor.eu/pubs.html] (local copies):
- [Data Migration and Ingestion Tools^Porting Cultural Repositories to the Semantic Web (2008).pdf]: 12p. Shows frequency of use and distribution of conversion Rules (methods) by group and dataset (p.7). Has some estimates on effort (p.10 sec 4.4).
- [Data Migration and Ingestion Tools^Semantic Excavation of the City of Books (2007).pdf]: 8p
- [Data Migration and Ingestion Tools^Thesaurus and Metadata Alignment for a Semantic E-Culture Application (2007).pdf]: 2p

(from Vlado)
AnnoCultor has grown into a fully-fledged conversion framework that allows converting databases and XML files to RDF and semantically tagging them with links to vocabularies, so they can be published as Linked Data on the Semantic Web. It consists of:
- AnnoCultor Converter: converts SQL databases, XML files, and SPARQL datasets to RDF. Converters are written in XML in a simple declarative way, and common XML editing skills are sufficient to write one.
- AnnoCultor Tagger: allows assigning semantic tags (terms from existing vocabularies) to your data. Recently used to semantically tag nearly 7 million records from the Europeana collections with location data.
- AnnoCultor Time Ontology: a vocabulary of time periods: millennia, centuries, half-centuries, quarters, decades, and years. It also includes historical periods, like the Middle Ages.
(Last time I looked, there was no viable time periods ontology, maybe that's changed)

Source:
- [http://annocultor.svn.sourceforge.net/viewvc/annocultor/]
- it's a bit unclear which is the latest release
- [http://annocultor.svn.sourceforge.net/viewvc/annocultor/trunk/src/main/java/eu/annocultor/converters/time/OntologyToHtmlGenerator.java] was updated most recently (7 weeks ago)
- most of the other files are a year old

AnnoCultor implements the following conversion Rules (a conceptual sketch of a couple of them follows the list):
- Create constant
- Rename resource property
- Rename literal property
- Replace value
- Sequence
- Lookup person
- Lookup place
- Lookup term
- Facet rename property
- Batch
- Use value of other path
- Use other subject
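
These rules are declared in AnnoCultor's XML converter files. Below is only a conceptual sketch, in plain Java, of what two of the rule types (rename a literal property, look up a term) do to one source record; the record fields, vocabulary map and output shape are invented for illustration and are *not* AnnoCultor's actual API or syntax.
{code:java}
import java.util.*;

// Conceptual sketch only (NOT AnnoCultor's API). It shows what a
// "Rename literal property" and a "Lookup term" rule conceptually do
// to one source record (field name -> value).
public class ConversionRulesSketch {

    static final Map<String, String> sourceRecord = Map.of(
            "titel", "Stilleven met bloemen",
            "materiaal", "olieverf op doek");

    // "Rename literal property": emit the source value under a target RDF property.
    static String renameLiteralProperty(String sourceField, String targetProperty) {
        return String.format("<obj/1> %s \"%s\" .", targetProperty, sourceRecord.get(sourceField));
    }

    // "Lookup term": replace the literal with a link to a vocabulary term, if one matches.
    static String lookupTerm(String sourceField, Map<String, String> vocabulary) {
        String value = sourceRecord.get(sourceField);
        String termUri = vocabulary.get(value.toLowerCase());
        return termUri != null
                ? String.format("<obj/1> dcterms:medium <%s> .", termUri)  // matched: link to the term
                : String.format("<obj/1> dcterms:medium \"%s\" .", value); // no match: keep the literal
    }

    public static void main(String[] args) {
        // Invented placeholder vocabulary standing in for a real thesaurus lookup.
        Map<String, String> materials = Map.of(
                "olieverf op doek", "http://example.org/thesaurus/oil-on-canvas");

        System.out.println(renameLiteralProperty("titel", "dc:title"));
        System.out.println(lookupTerm("materiaal", materials));
    }
}
{code}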

h1. CRM Mapping

An approach by Doerr and company for mapping anything to CRM.
This is all about slicing, dicing and combining source paths to target paths.
!CRM-mapping-path-slicing-and-dicing.png|width=600!
Got from the CRM site: [crm_mappings|http://www.cidoc-crm.org/crm_mappings.html], [data-transformations|http://www.cidoc-crm.org/data-transformations.html], [tools|http://www.cidoc-crm.org/tools.html].
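
To make the path "slicing and dicing" concrete, here is a minimal sketch (my own illustration in plain Java, not the FORTH tool or its mapping XML) of how a single flat source column expands into a chained CRM path. The URIs are invented; the particular path is just one plausible CRM reading of a "creator" column.
{code:java}
import java.util.*;

// Sketch: one flat source cell ("creator" = "Jan van Eyck") is expanded
// into a chain of CRM nodes and properties. URIs and the choice of path
// are illustrative; the class/property IDs are from CIDOC CRM.
public class CrmPathSketch {
    public static void main(String[] args) {
        String object     = "<http://example.org/object/1>";
        String production = "<http://example.org/object/1/production>";
        String actor      = "<http://example.org/actor/jan-van-eyck>";
        String creator    = "Jan van Eyck"; // the single source column value

        List<String> triples = List.of(
            object + " a crm:E22_Man-Made_Object .",
            object + " crm:P108i_was_produced_by " + production + " .",
            production + " a crm:E12_Production .",
            production + " crm:P14_carried_out_by " + actor + " .",
            actor + " a crm:E39_Actor .",
            actor + " rdfs:label \"" + creator + "\" .");

        triples.forEach(System.out::println);
    }
}
{code}
The point is that the intermediate Production node does not exist in the source at all: the mapping has to mint it, so that the value of a single source column can land at the end of a multi-step target path.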

Papers:
- [Data Migration and Ingestion Tools^Mapping format for data structures to CRM (2001).pdf]: older paper
- [Data Migration and Ingestion Tools^Mapping a Data Structure to the CIDOC Conceptual Reference Model (2002).pdf]: presentation
- [Data Migration and Ingestion Tools^Mapping Language for Information Integration (FORTH TR385 Dec2006).pdf]: paper describing the process
Proposes a conceptual structure for describing mappings, in the form of an XML schema:
!CRM-mapping-XML-schema.png|width=600!

Tool versions (note that older releases may contain material that is not in newer releases\!)
- Saved to [\\ontonas\all-onto\Projects\culture\mapping\CRM-mapping]
- [CidocXML2RDFv6|http://www.cidoc-crm.org/downloads/CidocXML2RDFv6.rar] (Sep 2011): includes CRMdig ontology for Digitalization (3D COFORM)
- [MappingTool (v 1.1)|http://www.cidoc-crm.org/downloads/MappingTool(XML2RDF-DataTransformation)(v%201.1).zip] (Oct 2010): includes a GUI tool for mapping (!), generates XML of the mapping (?), implements the mapping process (?)
- [mapping_tool_4_12_02|http://www.cidoc-crm.org/docs/mapping_tool_4_12_02.zip] (Apr 2002)


h1. Talend

Open source ETL framework.
- Used extensively by Onto's LifeSci group. They also develop custom components for RDF output, semantic annotation, etc.
- Proposed for use by FP7 SME [Bids:Linked City]
- Used by UC Berkeley for a [CollectionSpace|CollectionSpace] deployment

Vaso and Deyan Peychev swear by it. It includes:
- a GUI for creating, and a framework for executing, complex ETL flows, with exception catching, document routing, etc.
- a GUI for creating mappings, eg from XML to RDF.
I'll talk to Deyan on Sep 30 about whether the Talend mapper can implement CRM path slicing & dicing

Resources:
- [Talend:] space (now open to SSL and SirmaITT)
- [LIFESKIM:Talend] intro, [Tutorial|Talend:Tutorial - Talend Semantic ETL 0.2] (these may already be merged into the above space)


h1. MINT
[MINT|http://mint.image.ece.ntua.gr/redmine/projects/mint/wiki] is a data conversion toolkit used by numerous projects (Athena, Judaica etc) to contribute to Europeana.
Nice graphical mapper, nice demo movie, etc.

h1. Delving

- Applied under FP7 PSP (the call finished Jul 31); we'll see if it gets funding.
- Found out about it at the CIDOC conference that just ended: [http://www.brukenthalmuseum.eu/cidoc/uk/file/abstracts.pdf] p.11

h1. Karma
Karma is a Data Integration Tool by USC. It enables users to quickly and easily integrate data from a variety of sources (databases, spreadsheets, delimited text files, XML, JSON, KML and Web APIs) and convert it to RDF.
- [http://www.isi.edu/integration/karma/]: the Karma website is very informative, including papers and videos.
It describes applications to biosciences, cultural heritage (Smithsonian), geo mashups, web APIs (eg Twitter).
- It includes a nice graphical tool for creating the semantic mapping of data, and a nice informative way of presenting it, eg:

!Karma-biomedical-mapping.png|width=1000!

- Builds the mapping semi-automatically: uses field pattern learning (based on Conditional Random Fields) and ontology graph traversal to help the user construct the mapping (see the sketch after this list)
- Property domain and range definitions are very important for Karma's work. I think that CRM is a bit too abstract to be an appropriate target for Karma, but it would be interesting to try
- Stronger semantic capabilities than Google Refine, but weaker (or no) data munging capabilities (see review below)
- I wonder if it can be integrated with the RDB2RDF W3C standard, and maybe with Ultrawrap.
- Karma is a data structure mapping tool, not an individual (term) matching tool. The latest application (see below) includes term matching, but no tool support
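
The CRF-based field labelling is too involved for a short example, but the ontology-graph-traversal part can be sketched: given two classes that the user's columns have been labelled with, a breadth-first search over the class/property graph proposes the shortest connecting property path. This is only a conceptual sketch over an invented three-class ontology, not Karma's actual algorithm or code.
{code:java}
import java.util.*;

// Conceptual sketch (not Karma's code): propose a property path connecting
// two classes by breadth-first search over a tiny, invented ontology graph.
public class OntologyPathSketch {

    // class -> (property -> target class)
    static final Map<String, Map<String, String>> graph = Map.of(
            "Artwork", Map.of("createdBy", "Person", "depicts", "Place"),
            "Person",  Map.of("bornIn", "Place"),
            "Place",   Map.of());

    static List<String> shortestPath(String from, String to) {
        Map<String, List<String>> paths = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>(List.of(from));
        paths.put(from, List.of());
        while (!queue.isEmpty()) {
            String cls = queue.poll();
            if (cls.equals(to)) return paths.get(cls);
            for (Map.Entry<String, String> edge : graph.getOrDefault(cls, Map.of()).entrySet()) {
                if (!paths.containsKey(edge.getValue())) {
                    List<String> next = new ArrayList<>(paths.get(cls));
                    next.add(edge.getKey());
                    paths.put(edge.getValue(), next);
                    queue.add(edge.getValue());
                }
            }
        }
        return null; // no directed path between the two classes
    }

    public static void main(String[] args) {
        // e.g. one column was labelled Artwork and another Place:
        System.out.println(shortestPath("Artwork", "Place"));  // [depicts]
        System.out.println(shortestPath("Person", "Artwork")); // null
    }
}
{code}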

h2. Review of Karma Application to the Museum Domain
Recently Karma has been applied to the Museum Domain (for the Smithsonian museum). A nice infographic:
!Karma-Smithsonian.png|width=900!

Dominic got a preprint:
[^Connecting the Smithsonian American Art Museum to the Linked Data Cloud (ESWC 2013).pdf]

Here's a brief review of that paper by Vlado:
- Smithsonian's Gallery Systems TMS installation has 100 tables, but only 8 tables are mapped: those that drive the museum Web site. So it's NOT a complex mapping task
- Not a large collection: 41k objects, 8k authors, 44k total terms
- Map to own ontology based on EDM (not a very complex model).
-- Why did you need your own ontology? You can attach extra properties to EDM classes, without introducing subclasses.
-- Does not map to full EDM representation, eg Proxies are missing (see Fig.1)
-- "EDM and CIDOC CRM: both are large and complex ontologies, but neither fully covers the data that we need to publish": I see two inaccuracies here:
--- EDM is much simpler than CRM (although EDM events are inspired by CRM)
--- CRM is certainly adequate to represent all of the info. (Note: "constituent" means crm:Agent, so saam:constituentId would be mapped to an Agent_Appellation)
- Also use these ontologies:
-- SKOS for classification of artworks, *artist* and *place* names
-- RDAGr2 for biographical information (same as Josh)
-- schema.org for places (why not geonames?)
- "in the complete SAAM ontology there are 407 classes, 105 data properties, 229 object properties": why so many? Fig.1 depicts only a few, you wouldn't need so many to map 8 tables, and that's a lot more than CRM
-- Ok, I think I can guess the reason. That's the sum of entities (classes and properties) in all used ontologies. But the particular mapping uses only a few. In fact it's typical in an ontology engineering task that you'd bring in a large number of entities, but use relatively few. So I think Karma needs a "subsetting" function so the user can let it know which entities are relevant (consider "shopping basket" in NIEM)
- "the community can benefit from guidance on vocabularies to represent data": that's true
- "Challenge 1: Data preparation... We addressed these data preparation tasks before modeling them in Karma by writing scripts in Java": yes, very often in the real world you need to split, pattern-match or concatenate. IMHO these are first-class tasks just like semantic modeling. Tool support for them is also needed, eg as Google Refine provides.
-- "Lesson 3: The data preparation/data mapping split is effective": in more complex situations the data munging depends on the meaning of other data that's already semantically mapped, therefore such split is not always easy. That's why GUI tools sometimes hit a limitation and you need to "escape" into a programming model/language
-- "RDF mapping tools (including Karma) lack the needed expressivity": languages like XQuery and XSPARQL have it
- "Lesson 4: Property domain and range definitions are important": indeed! I think that CRM is a bit too abstract to be an appropriate target for Karma, but it would be interesting to try
- "3 Linking to External Resources" describes a quite simple approach of matching people by name and life dates. It uses simple/standard comparison metrics and combination methods (I am pretty sure SILK supports these). It shows great F scores on a small set. There is no tool support.
-- Maps to owl:sameAs triples. It would be interesting to hear your thoughts on the "skos:exactMatch or owl:sameAs" question on answers.semanticweb.com
- "4 Curating the RDF and the Links": PROV info is recorded about the mapping links (eg mapping confidence/score, who/when verified) and displayed in a GUI tool. SILK has a similar tool, and positive/negative examples are used for machine learning.
- "5 Related Work" needs to be elaborated, and made more objective
-- eg there's also the Polish National Digital Museum aggregation. Both LODAC and the Polish aggregation use OWLIM, which is why we know about them. I'm sure there are more...
-- "we have performed significantly more linking of data than any of these previous efforts": that is not true IMHO. Check out Europeana Enrichment (part of the Europeana dataset), which maps entities from 20M CH objects to DBpedia, Geonames and a subject thesaurus. I'm not sure how comprehensive that enrichment is, but the volume is much bigger
-- "We tried to use Silk on this project, but we found it extremely difficult to write a set of matching rules that produced high quality matches": I find that hard to believe, but would be very interested to try it if it proves a weakness in SILK. If you publish the Smithsonian thesaurus data, I'll try it out
-- "\[an approach that] deals well with missing values and takes into account the discriminability of the attribute values in making a determination of the likelihood of a match": SILK is open source and has an extensible Rules language, couldn't these needs/features be added to SILK?

TODO:
- (!) read Song, D., Heflin, J.: Domain-independent entity coreference for linking ontology instances. ACM Journal of Data and Information Quality (ACM JDIQ) (2012)
- (!) check out EverythingIsConnected.be