Last updated by laura.tolosi on Dec 08, 2014 16:46.

{toc}

{attachments}

h2. Introduction

h5. SentiWordNet processing:

Terms in SentiWordNet are assigned polarity scores per synset, so one term can occur several times, with different meanings and different polarity scores. We aggregated the scores into one as follows:
* words with both a high positive and a high negative score (in different synsets) clearly depend on context; since they were few, we eliminated them rather than perform word-sense disambiguation. These words were defined as follows:
min_synsets(sentiwordnet_Pos(w)-sentiwordnet_Neg(w)) < -0.5
max_synsets(sentiwordnet_Pos(w)-sentiwordnet_Neg(w)) > 0.5
* words with very similar positive and negative scores in all synsets are considered neutral. We also remove them:
max_synsets(abs(sentiwordnet_Pos(w)-sentiwordnet_Neg(w))) < 0.2
* the final score is the most polarizing positive-negative difference across synsets, mapped to [0,1]:
*score_SentiWordNet(w) = 0.5 max_synsets(sentiwordnet_Pos(w) - sentiwordnet_Neg(w)) + 0.5*
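A minimal sketch of this aggregation in Python, assuming the per-synset scores of a word are given as (pos, neg) pairs; the function and variable names are illustrative:

```python
def aggregate_sentiwordnet(synset_scores):
    """Aggregate per-synset (pos, neg) scores of one word into a score in [0, 1].

    synset_scores: list of (pos, neg) tuples, one per synset of the word.
    Returns None when the word is eliminated from the lexicon.
    """
    diffs = [pos - neg for pos, neg in synset_scores]
    # Context-dependent: strongly negative in one synset, strongly positive in another
    if min(diffs) < -0.5 and max(diffs) > 0.5:
        return None
    # Neutral: positive and negative scores very similar in all synsets
    if max(abs(d) for d in diffs) < 0.2:
        return None
    # Most polarizing difference, mapped from [-1, 1] to [0, 1]
    return 0.5 * max(diffs) + 0.5
```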

h5. MPQA processing:

The dataset is annotated with positive, negative, and neutral labels, without probabilities. We assigned:
* w positive: *score_MPQA(w) = 1*
* w negative: *score_MPQA(w) = 0*
* w neutral: *score_MPQA(w) = 0.5*
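As a sketch, this assignment is a direct label-to-score lookup (names illustrative):

```python
# Fixed scores for the three MPQA polarity labels
MPQA_SCORES = {"positive": 1.0, "negative": 0.0, "neutral": 0.5}

def score_mpqa(label):
    """Map an MPQA polarity label to its score in [0, 1]."""
    return MPQA_SCORES[label]
```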

h5. IMDB processing:

We obtain probabilities from document counts as follows:

*score_IMDB(w) = P(positive|w) = count(w in positive documents) / count(w in documents)*
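A small sketch of this estimate from document counts; the argument names are illustrative, and treating unseen words as neutral is an added assumption:

```python
def score_imdb(count_in_positive, count_in_all):
    """P(positive | w): fraction of documents containing w that are positive."""
    if count_in_all == 0:
        return 0.5  # assumption: a word seen in no document is treated as neutral
    return count_in_positive / count_in_all
```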


h5. Aggregation into one final score:

*score(w) = 0.4 score_SentiWordNet(w) + 0.4 score_MPQA(w) + 0.2 score_IMDB(w)*
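The weighted combination itself is a one-liner; a sketch:

```python
def combined_score(score_swn, score_mpqa, score_imdb):
    """Weighted combination of the three lexicon scores, each in [0, 1]."""
    return 0.4 * score_swn + 0.4 * score_mpqa + 0.2 * score_imdb
```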

The resulting file is [attached|^Lexicon_combined.csv] to this page.

h3. Sentiment evaluation algorithms

Pipeline for document sentiment:
# Sentiment mapping
# Tokenization (+ stemming)
# Mapping to dictionary
# Sentiment evaluation: average over the scores of all mapped words
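The document pipeline above can be sketched end to end; the whitespace tokenizer and the in-memory lexicon dict are simplified stand-ins for the real tokenizer/stemmer and dictionary:

```python
def document_sentiment(text, lexicon):
    """Average lexicon score over all tokens found in the dictionary.

    lexicon: dict mapping (stemmed) tokens to scores in [0, 1].
    Returns None when no token maps to the dictionary.
    """
    tokens = text.lower().split()                          # tokenization (stemming omitted)
    scores = [lexicon[t] for t in tokens if t in lexicon]  # mapping to dictionary
    if not scores:
        return None
    return sum(scores) / len(scores)                       # average over mapped words
```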

Pipeline for paragraph sentiment:
# Tokenization (+ stemming)
# Mapping to dictionary
# Paragraph identification
# Averaging scores of mapped words per paragraph
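Per-paragraph scoring differs only in grouping mapped tokens by paragraph before averaging; a sketch, assuming paragraphs are separated by blank lines:

```python
def paragraph_sentiments(text, lexicon):
    """Average lexicon score of mapped tokens, one value per paragraph."""
    results = []
    for paragraph in text.split("\n\n"):   # paragraph identification
        tokens = paragraph.lower().split()
        scores = [lexicon[t] for t in tokens if t in lexicon]
        results.append(sum(scores) / len(scores) if scores else None)
    return results
```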

Pipeline for entity sentiment:
# Concept tagging
# Tokenization
# Sentiment mapping
# Segmentation: using parsing, identify which tokens refer directly to the target entity (not available in the current version)
# Map tokens to the senti-dictionary
# Sentiment evaluation for the target entity:
* If segmentation is performed, average over the scores of tokens that are related to the target entity
* Otherwise, use an aggregate score that gives larger weight to the close-by tokens, rather than the remote ones (in the frame of a sentence):
*score(E) = sum_(w in sentence) score(w)/dist(w,E)*
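A sketch of this fallback, taking dist(w,E) as the absolute token-position distance within the sentence (an assumption; names illustrative):

```python
def entity_score(tokens, entity_index, lexicon):
    """Distance-weighted sentiment for the entity at position entity_index.

    Each mapped token contributes score(w) / dist(w, E), so sentiment
    words close to the entity weigh more than remote ones.
    """
    total = 0.0
    for i, token in enumerate(tokens):
        if i == entity_index or token not in lexicon:
            continue
        total += lexicon[token] / abs(i - entity_index)  # dist(w, E) in token positions
    return total
```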