** Stochastic gradient ascent (fast)
* Parallelization:
** A multithreaded maxent training approach by Mann et al. (2009) ([Efficient Large-Scale Distributed Training of Conditional Maximum Entropy Models|])
* Modified objective for targeted optimization of particular Precision/Recall trade-off:
** We implemented a weighted likelihood objective that optimizes a specific F_beta for a given beta, which means that we can specify a desired Precision/Recall trade-off. In practice, we can therefore train models with very high Precision, or very high Recall, at the expense of the complementary measure.
** Main publication: [Dimitroff et al. Weighted maximum likelihood as a convenient shortcut to optimize the F-measure of maximum entropy classifiers, RANLP 2013|]
* Regularization:
** L1 regularization is often used in practice to obtain sparse models and reduce overfitting. An L1-regularized maxent can also serve as a feature selection procedure.
** Because of its speed, it is preferred when training on large datasets.
* Sigmoid perceptron
** We implemented a modification of the perceptron that allows for probabilistic output. [Here|] is a draft of the paper.
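The weighted-likelihood idea above can be sketched as class-weighted logistic regression trained by stochastic gradient ascent. This is an illustrative sketch only, not Edlin's implementation: the function names, the binary setup, and the `class_weight` dictionary are assumptions. Scaling each example's log-likelihood contribution by a per-class weight shifts the decision boundary toward the up-weighted class, which is what lets one trade Precision against Recall.

```python
import math
import random

def train_weighted_logreg(data, class_weight, epochs=200, lr=0.1, seed=0):
    """Stochastic gradient ascent on a class-weighted log-likelihood.

    data: list of (features, label) pairs with label in {0, 1};
    class_weight: {0: w0, 1: w1} -- up-weighting class 1 makes its
    errors costlier, pushing the model toward higher recall on class 1.
    """
    data = list(data)              # avoid mutating the caller's list
    dim = len(data[0][0])
    w = [0.0] * (dim + 1)          # last slot holds the bias term
    rng = random.Random(seed)
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            z = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = class_weight[y] * (y - p)   # weighted log-likelihood gradient
            for i, xi in enumerate(x):
                w[i] += lr * g * xi
            w[-1] += lr * g
    return w

def predict(w, x):
    z = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if z >= 0.0 else 0
```

With `class_weight={0: 1.0, 1: 5.0}`, false negatives on class 1 cost five times more during training, so the model trades some Precision for Recall on that class.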

h5. Naive Bayes

h3. Algorithms for sequence tagging (or labelling)

Sequence tagging is a typical NLP task in which all components of a sequence are classified jointly, rather than being split into many separate instances. Joint classification allows the use of sequence-context features, which improves performance. A classical example of sequence tagging is _part-of-speech tagging_, where the best global assignment of part-of-speech labels is inferred.

h5. Conditional random fields (CRF)

The CRF algorithm was proposed in [Lafferty, J., McCallum, A., Pereira, F. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data, Proc. 18th International Conf. on Machine Learning.|].
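At prediction time, a linear-chain CRF finds the best global label sequence with the Viterbi dynamic program over per-position and transition scores. A minimal sketch in log-space (the function name and the score layout are assumptions for illustration, not Edlin's API):

```python
def viterbi(emissions, transitions):
    """Return the highest-scoring label sequence.

    emissions[t][y]: score of label y at position t;
    transitions[y_prev][y]: score of moving from y_prev to y.
    The scores play the role of log-space CRF potentials.
    """
    n_labels = len(emissions[0])
    # best[t][y] = best score of any sequence ending in label y at t
    best = [emissions[0][:]]
    back = []
    for t in range(1, len(emissions)):
        row, ptr = [], []
        for y in range(n_labels):
            score, y_prev = max(
                (best[-1][yp] + transitions[yp][y], yp)
                for yp in range(n_labels))
            row.append(score + emissions[t][y])
            ptr.append(y_prev)
        best.append(row)
        back.append(ptr)
    # trace the argmax path backwards through the backpointers
    y = max(range(n_labels), key=lambda y: best[-1][y])
    path = [y]
    for ptr in reversed(back):
        y = ptr[y]
        path.append(y)
    return list(reversed(path))
```

The dynamic program makes joint decoding tractable: it considers all label sequences implicitly, in time linear in the sequence length.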

h5. Perceptron

The structured perceptron, with parallelization, is implemented following McDonald et al. (2010) ([Distributed Training Strategies for the Structured Perceptron|]).
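The distributed strategy can be sketched as iterative parameter mixing, in the spirit of McDonald et al. (2010): each round, every data shard runs perceptron training starting from the current mixed weights (in parallel in the real setting), and the per-shard weights are then averaged. The sketch below is illustrative only and uses a plain binary perceptron in place of the structured one; all names are assumptions.

```python
def perceptron_epoch(w, shard, lr=1.0):
    """One perceptron pass over a data shard; returns updated weights.
    (A plain binary perceptron stands in for the structured case.)"""
    w = w[:]
    for x, y in shard:          # y in {-1, +1}
        score = sum(wi * xi for wi, xi in zip(w, x))
        if y * score <= 0:      # mistake-driven update
            for i, xi in enumerate(x):
                w[i] += lr * y * xi
    return w

def iterative_parameter_mixing(shards, dim, rounds=10):
    """Each round: train on every shard from the mixed weights,
    then average the per-shard results back into one weight vector."""
    w = [0.0] * dim
    for _ in range(rounds):
        local = [perceptron_epoch(w, s) for s in shards]
        w = [sum(ws[i] for ws in local) / len(local) for i in range(dim)]
    return w
```

Because each shard restarts every round from the shared average, the mixed model benefits from all shards' mistakes, unlike a single one-shot average of independently trained models.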

h3. Algorithms for feature selection
NLP datasets are characterized by a large number of features, sometimes orders of magnitude larger than the number of available training samples. To avoid overfitting, feature selection can be applied prior to or during model training. We provide several approaches to feature selection:
* Filter by Fisher's exact test (association between feature and outcome), keeping either a fixed percentage of the features or all features whose p-value is small enough.
* Filter by mutual information (between feature and outcome), keeping either a fixed percentage of the features or all features whose mutual information is large enough.
* A feature induction algorithm, described in ([Tolosi et al. 2013. A Feature Induction Algorithm with Application to Named Entity Disambiguation. RANLP 2013|]).
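The mutual-information filter can be sketched as follows. This is illustrative only: the empirical MI estimator, the function names, and the `keep_fraction` parameter are assumptions, not Edlin's interface. Each feature column is scored by its empirical mutual information with the labels, and the top-scoring fraction is kept.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI (in nats) between two discrete variables, from empirical counts."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        # p(x,y) * log( p(x,y) / (p(x) * p(y)) ), with counts cancelled
        mi += p * math.log(p * n * n / (px[x] * py[y]))
    return mi

def filter_by_mi(feature_matrix, labels, keep_fraction=0.5):
    """Rank feature columns by MI with the labels; keep the top fraction."""
    cols = list(range(len(feature_matrix[0])))
    ranked = sorted(
        cols,
        key=lambda j: mutual_information(
            [row[j] for row in feature_matrix], labels),
        reverse=True)
    k = max(1, int(len(cols) * keep_fraction))
    return sorted(ranked[:k])
```

A feature that is independent of the outcome scores zero MI, while a perfectly predictive feature scores the entropy of the label distribution, so the ranking directly reflects each feature's predictive value in isolation.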

h2. Feature Extraction module

A module for feature extraction: classification instances are produced automatically, as features are extracted from documents using a set of [Groovy|] rules.

h2. Edlin-Wrapper (for GATE)
Mallet-Wrapper wraps the algorithms of [Mallet|], so that they can be used in [GATE|] for multiple information extraction purposes.
The algorithms are wrapped as ProcessingResources and LanguageResources and can be applied directly in a pipeline.

h2. Document classification API (DAPI)

Currently not part of Edlin.