
{toc}

h2. Introduction

The aim is to evaluate sentiment polarity (Negative/Positive) at several levels of granularity:
* *document* (overall sentiment): appropriate for blog posts or technical review articles; estimates whether the author's opinion on the topic is generally positive or negative. Strong polarity indicates a highly subjective author.
* *paragraph* (aspect oriented): each paragraph expresses one aspect of the overall topic discussed in the document. News articles, which are expected to present a balanced view of an event, typically alternate positive and negative aspects across their constituent paragraphs.
* *entity* (very specific target): appropriate for extracting detailed opinions on products, components, events, and other specific targets, together with aggregation over a corpus for market analysis.

Sentiment prediction can be supervised, semi-supervised or unsupervised.

_Supervised_ approaches rely on annotated datasets. Because sentiment is strongly domain specific, it is important that a large annotated corpus from the target domain is available. When it is not, domain adaptation methods can be used; these rely on a large out-of-domain corpus plus a small supplementary annotated corpus from the target domain.

_Unsupervised_ methods rely on sentiment dictionaries: large lists of words with scores quantifying their polarity. Words in free text are mapped to the dictionary, and aggregation statistics over the matched scores are used to evaluate sentiment.

_Semi-supervised_ approaches rely on a small set of annotated texts or a small polarity dictionary, which is then expanded either by bootstrapping or by using external knowledge bases such as WordNet.
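As an illustration of the knowledge-base expansion idea (not necessarily the exact procedure), the sketch below grows a small seed lexicon through WordNet synonyms and antonyms using NLTK; the seed words and the simple propagation rule are assumptions made for the example.

{code:python}
# Sketch: expanding a small seed lexicon through WordNet (via NLTK).
# Requires the WordNet corpus: nltk.download('wordnet').
# The seed words and the propagation rule are illustrative assumptions.
from nltk.corpus import wordnet as wn

seeds = {"good": 1.0, "excellent": 1.0, "bad": 0.0, "awful": 0.0}

def expand(seeds):
    expanded = dict(seeds)
    for word, score in seeds.items():
        for synset in wn.synsets(word):
            for lemma in synset.lemmas():
                # Synonyms inherit the seed score (scores live in [0, 1]).
                expanded.setdefault(lemma.name().lower(), score)
                # Antonyms get the opposite polarity.
                for antonym in lemma.antonyms():
                    expanded.setdefault(antonym.name().lower(), 1.0 - score)
    return expanded

lexicon = expand(seeds)
print(len(lexicon), "entries")
{code}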

h2. Our approach (for English)

Our tagging service is intended for generic documents, with no specified domain (at least in the early stages). We therefore opted for an unsupervised approach and composed a large sentiment dictionary from several open sources, as described below.

h3. Sentiment dictionary

We assembled a sentiment dictionary from three sources:
# SentiWordNet: http://tcc.itc.it/projects/ontotext/sentiwn.html (small)
# MPQA opinion corpus: http://www.cs.pitt.edu/mpqa/ (large)
# Stanford IMDB review dataset: http://ai.stanford.edu/~amaas/data/sentiment/ (very large)

From each of the above sources we extracted scores in a common format: a single score between 0 and 1, where values close to 1 indicate positive polarity and values close to 0 indicate negative polarity. Equivalently, the score can be expressed as two scores, positive and negative, that sum to 1.
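The two representations are interchangeable; a minimal sketch of the conversion (function names are illustrative):

{code:python}
# Sketch: the single score in [0, 1] and the (positive, negative) pair
# that sums to 1 carry the same information.
def to_pair(score):
    """Split one polarity score into (positive, negative)."""
    return score, 1.0 - score

def to_single(positive, negative):
    """Collapse a (positive, negative) pair into one score."""
    return positive / (positive + negative)

print(to_pair(0.8))         # (0.8, 0.2): clearly positive
print(to_single(0.1, 0.9))  # 0.1: clearly negative
{code}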

h5. SentiWordNet processing:

h5. MPQA processing:

h5. IMDB processing:

Aggregation into one score:

{noformat}
Score(w) = 0.4 * score_SentiWordNet(w) + 0.4 * score_MPQA(w) + 0.2 * score_IMDB(w)
{noformat}
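A minimal sketch of this aggregation, assuming the three source dictionaries are already loaded as maps from word to a score in [0, 1]; the handling of words missing from a source (renormalizing over the remaining weights) is an assumption, not something specified above.

{code:python}
# Sketch: combining per-source scores with the weights above.
# Assumes each source is a {word: score in [0, 1]} dict loaded elsewhere.
# Handling of words missing from a source is not specified in the text;
# this sketch renormalizes over the sources that do contain the word.
WEIGHTS = {"sentiwordnet": 0.4, "mpqa": 0.4, "imdb": 0.2}

def aggregate(word, sources):
    total, weight_sum = 0.0, 0.0
    for name, weight in WEIGHTS.items():
        score = sources[name].get(word)
        if score is not None:
            total += weight * score
            weight_sum += weight
    return total / weight_sum if weight_sum else None

sources = {
    "sentiwordnet": {"good": 0.9},
    "mpqa": {"good": 0.8, "bad": 0.1},
    "imdb": {"good": 0.7, "bad": 0.2},
}
print(aggregate("good", sources))  # 0.4*0.9 + 0.4*0.8 + 0.2*0.7 = 0.82
print(aggregate("bad", sources))   # (0.4*0.1 + 0.2*0.2) / 0.6 ~= 0.133
{code}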

h3. Sentiment evaluation algorithms

Pipeline for document sentiment:
# Sentiment mapping
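A minimal sketch of the document pipeline, assuming a lexicon in the aggregated format from the previous section; the regex tokenizer, the mean aggregation, and the 0.5 decision threshold are assumptions made for the example.

{code:python}
# Sketch: document-level sentiment via dictionary mapping.
# Assumes `lexicon` is the aggregated {word: score in [0, 1]} dictionary;
# the regex tokenizer, mean aggregation and 0.5 threshold are illustrative.
import re

def document_sentiment(text, lexicon):
    tokens = re.findall(r"[a-z']+", text.lower())
    scores = [lexicon[t] for t in tokens if t in lexicon]    # sentiment mapping
    if not scores:
        return None, 0.5
    mean = sum(scores) / len(scores)                         # aggregation
    return ("Positive" if mean >= 0.5 else "Negative"), mean

lexicon = {"great": 0.95, "boring": 0.1, "plot": 0.5}
print(document_sentiment("A great cast, but a boring plot.", lexicon))
{code}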

Pipeline for entity sentiment:
# Concept tagging
# Segmentation
# Sentiment mapping
# Sentiment evaluation
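A rough sketch of how the four entity-level steps could fit together; the concept tagger is stubbed with exact string matching, and the sentence-based segmentation plus per-entity averaging are assumptions made for the example.

{code:python}
# Sketch: entity-level sentiment following the four steps above.
# Concept tagging is stubbed with exact string matching; segmentation is
# naive sentence splitting; per-entity averaging stands in for evaluation.
import re
from collections import defaultdict

def entity_sentiment(text, entities, lexicon):
    results = defaultdict(list)
    for sentence in re.split(r"[.!?]+", text):                    # segmentation
        tokens = re.findall(r"[a-z']+", sentence.lower())
        mentioned = [e for e in entities if e.lower() in tokens]  # concept tagging (stub)
        scores = [lexicon[t] for t in tokens if t in lexicon]     # sentiment mapping
        if mentioned and scores:
            mean = sum(scores) / len(scores)
            for entity in mentioned:
                results[entity].append(mean)
    # sentiment evaluation: aggregate per entity over all matching segments
    return {e: sum(v) / len(v) for e, v in results.items()}

lexicon = {"fast": 0.9, "noisy": 0.15}
print(entity_sentiment("The laptop is fast. The fan is noisy.", ["laptop", "fan"], lexicon))
{code}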