LACEhpc
The project LACEhpc aims at designing and implementing efficient high-performance computing methods for extracting monolingual and multilingual resources from comparable non-parallel corpora.
Goals
The project LACEhpc is divided into four main tasks:
- extracting n-grams from monolingual corpora;
- aligning n-grams in bilingual corpora;
- building monolingual and multilingual language models;
- minimizing and indexing the resulting databases for use in the UNL framework.
The proposal includes the adaptation and implementation of existing algorithms; the evaluation, revision and optimization of extraction and alignment methods; and studies on the sustainability of the resulting techniques, with special attention to scalability and portability.
In addition to HPC-oriented algorithms, the project is expected to deliver several different monolingual and bilingual databases, as well as aligned corpora and translation memories, which are important assets for natural language processing and fundamental resources for research in Linguistics and Computational Linguistics.
Methodology
Corpus
In order to extract the data, we have proposed the use of Wikipedia as our corpus.
The choice of Wikipedia is motivated by five main reasons:
- Relevance: Wikipedia is one of the largest reference web sites, attracting nearly 68 million visitors monthly;
- Multilinguality: Wikipedia comprises more than 15,000,000 articles in more than 270 languages, many of which are inter-related and may be used to constitute a document-aligned multilingual comparable (non-parallel) corpus;
- Comprehensiveness: Wikipedia is not constrained in domain;
- Openness: Wikipedia texts are available under the Creative Commons Attribution-Share Alike License, which would avoid copyright issues concerning the distribution and use of the derived material;
- Accessibility: Wikipedia is easily and freely downloadable.
The raw corpus is presented in two distributions at [1]:
- The experimental corpus contains 10K documents from 3 languages (English, French and Japanese), aligned at the document level;
- The abridged corpus contains 100K documents from 10 languages (Chinese, English, French, German, Italian, Japanese, Polish, Portuguese, Russian and Spanish), aligned at the document level.
Definitions
N-gram
In the scope of the project LACE, an n-gram is a linear structure of n strings composed entirely of alphabetic characters or hyphens (i.e., [a-zA-Z-]), delimited by blank spaces, punctuation marks, sentence boundaries, and other signs such as [.,;:!?()"<>]. Strings containing digits or any other non-alphabetic characters (such as [@_#$%/]) were ignored[1].
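This extraction rule can be sketched in a few lines. The following is a minimal illustration, not the project's implementation: the function names and the decision that an invalid token breaks adjacency (rather than being silently skipped) are our own reading of the definition and of the endnote example.

```python
import re
from typing import Iterator

VALID = re.compile(r"^[A-Za-z-]+$")        # alphabetic tokens, hyphen allowed
SPLIT = re.compile(r"[\s.,;:!?()\"<>]+")   # delimiters listed in the definition

def continuous_ngrams(text: str, max_n: int = 7) -> Iterator[tuple]:
    """Enumerate continuous n-grams (1 <= n <= max_n) within runs of valid
    tokens. An invalid token (digits, symbols) breaks adjacency, so no n-gram
    spans across it, as in the endnote example."""
    segment = []
    for token in SPLIT.split(text):
        if token and VALID.match(token):
            segment.append(token)
        else:
            yield from _emit(segment, max_n)
            segment = []
    yield from _emit(segment, max_n)

def _emit(segment, max_n):
    for n in range(1, max_n + 1):
        for i in range(len(segment) - n + 1):
            yield tuple(segment[i:i + n])
```

On the endnote's input string “abc def g-hi jkl m1 234 nop qrs tu_vw”, this yields the six 1-grams, four 2-grams, two 3-grams and one 4-gram listed there.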
This linear structure can be “continuous” or “discontinuous”:
- a continuous n-gram, as exemplified above, is an invariant sequence of n immediately adjacent items, i.e., without any other items in-between;
- a discontinuous n-gram is an open pattern: a continuous n-gram in which some items are variable, i.e., a sequence of x fixed and y variable items (x + y = n), where the x fixed items occupy the same positions and are separated by the same number of variable in-between items. A discontinuous n-gram is valid only if, for the same fixed items, at least two different fillers are attested; otherwise we consider it noise. A discontinuous n-gram may have one or more discontinuities, but since its external boundaries must be fixed, discontinuity is limited to the internal items of an n-gram. In our notation, discontinuities are represented by the placeholder "."[2] Given the precondition of fixed external boundaries, discontinuous n-grams must satisfy n > 2.
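As a rough sketch (function names are illustrative, not the project's), the discontinuous patterns derivable from one continuous n-gram can be enumerated by replacing one or more internal items with the "." placeholder; the validity test (at least two distinct fillers for the same fixed items) would then be applied corpus-wide in a separate pass:

```python
from itertools import combinations

def discontinuous_patterns(gram):
    """Yield every discontinuous pattern of a continuous n-gram by replacing
    internal items with ".". Boundaries stay fixed, so n must exceed 2."""
    n = len(gram)
    if n <= 2:
        return
    inner = range(1, n - 1)                 # only internal positions may gap
    for k in range(1, n - 1):               # number of discontinuities
        for gaps in combinations(inner, k):
            yield tuple("." if i in gaps else gram[i] for i in range(n))
```

For the 4-gram (“abc”, “def”, “g-hi”, “jkl”) this produces exactly the three discontinuous 4-grams of the endnote: “abc . g-hi jkl”, “abc def . jkl” and “abc . . jkl”.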
In the scope of the project LACE, n-grams are considered to be "linguistically relevant" if they are frequent, non-redundant, of a certain length, and may figure as syntactic and semantic units, according to the following criteria:
- Length ≤ 7:
- In the context of LACE, we treated both continuous and discontinuous n-grams with up to 7 items, i.e., where 1 ≤ n ≤ 7.
- Frequency:
- In the context of LACE, we considered an n-gram to be frequent in the corpus if its frequency of occurrence is equal to or higher than the ratio between tokens and types, where “tokens” is the total number of n-grams in the corpus, and “types” is the number of distinct n-grams in the corpus. For instance: given a corpus with 5,000 occurrences of 1,000 distinct unigrams, a 1-gram is considered relevant if, and only if, it occurs 5 or more times.
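The threshold can be computed directly from the counts. A minimal sketch follows; we assume (consistently with the worked unigram example, though the text could also be read as one global ratio) that the tokens/types ratio is computed separately for each n-gram length:

```python
from collections import Counter

def frequency_filter(grams):
    """Keep n-grams whose count is >= the tokens/types ratio, computed
    per n-gram length. `grams` is an iterable of n-gram tuples."""
    by_n = {}
    for g in grams:
        by_n.setdefault(len(g), []).append(g)
    kept = {}
    for items in by_n.values():
        counts = Counter(items)
        threshold = len(items) / len(counts)   # tokens / types for this n
        kept.update({g: c for g, c in counts.items() if c >= threshold})
    return kept
```

For example, six unigram occurrences over two types give a threshold of 3, so a unigram seen five times is kept and one seen once is discarded.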
- Redundancy:
- In the context of LACE, we consider an n-gram to be redundant if it is subsumed by any other x-gram, where x ≥ n. In that sense, the 1-gram “a” is considered unique if, and only if, there is at least one context “x a” and at least one context “a y” where “x a” and “a y” have not themselves been retained as n-grams according to the length and frequency criteria above. For instance, the items “Sri” and “Lanka” are not considered to be 1-grams because they cannot occur in isolation: they always appear as part of the 2-gram “Sri Lanka” (i.e., there is no context in the corpus in which “Sri” occurs without “Lanka”). The same applies to discontinuous n-grams: the sequence “a . . d” is a 4-gram only if it is not subsumed by the 4-gram “a b . d”, i.e., if there is at least one occurrence of “a x . d” where x ≠ b.
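The “Sri Lanka” case can be illustrated with a deliberately simplified test. The sketch below checks only whether a 1-gram is subsumed by a single fixed 2-gram (the same neighbour on one side at every occurrence); the full criterion compares against all retained x-grams, which this example does not attempt:

```python
def redundant_unigram(word, tokens):
    """Simplified subsumption check: a 1-gram is flagged redundant if every
    occurrence shares the same left neighbour or the same right neighbour,
    i.e. it never occurs outside one fixed 2-gram ("Sri" -> "Sri Lanka").
    `tokens` is the corpus as a flat list of token strings."""
    lefts, rights = set(), set()
    for i, t in enumerate(tokens):
        if t == word:
            lefts.add(tokens[i - 1] if i > 0 else None)
            rights.add(tokens[i + 1] if i + 1 < len(tokens) else None)
    if not lefts:
        return False  # word absent from the corpus
    return (len(lefts) == 1 and None not in lefts) or \
           (len(rights) == 1 and None not in rights)
```

In a toy corpus where “Sri” is always followed by “Lanka”, the function flags “Sri” as redundant; a word that appears with several different neighbours on both sides is not flagged.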
MWE (multiword expression)
Anchor
Anchors are
Participants
The project LACEhpc has been developed by the UNDL Foundation in collaboration with the Centre for Advanced Modelling Science (CADMOS), which includes researchers from the University of Geneva (UNIGE) and from the École Polytechnique Fédérale de Lausanne (EPFL).
- Project Managers
- Bastien CHOPARD (CADMOS)
- Gilles FALQUET (UNIGE)
- Ronaldo MARTINS (UNDL Foundation)
- Participants
- Kamal CHICK ECHIOUK (UNDL Foundation)
- Meghdad FAHRAMAND (PhD student at UNIGE)
- Jean-Luc FALCONE (UNIGE)
- Jacques GUYOT (Simple Shift)
Files
- N-grams
- The n-grams are presented in two different sets: continuous n-grams and discontinuous n-grams. Each set is further organized into four subsets:
- 0. raw data (n-grams extracted from the corpus)
- 1. frequency filtered (n-grams whose frequency is equal to or higher than the tokens/types ratio for all n-grams in the corpus)
- 2. redundancy filtered (frequency-filtered n-grams that cannot be subsumed by any other existing frequency-filtered n-gram)
- 3. constituency scores (the results of applying constituency scores to the redundancy-filtered n-grams)
Support
The LACEhpc project is supported by a grant from the Hans Wilsdorf Foundation.
Notes
- ↑ This means that the input string “abc def g-hi jkl m1 234 nop qrs tu_vw” was said to have:
- six 1-grams (“abc”, “def”, “g-hi”, “jkl”, “nop”, “qrs”)
- four 2-grams (“abc def”, “def g-hi”, “g-hi jkl”, “nop qrs”)
- two 3-grams (“abc def g-hi”, “def g-hi jkl”)
- one 4-gram (“abc def g-hi jkl”).
- ↑ In the example above, there are two discontinuous 3-grams (“abc . g-hi”, “def . jkl”) and three discontinuous 4-grams (“abc . . jkl”, “abc def . jkl”, “abc . g-hi jkl”).