LACE

From UNL Wiki
Revision as of 21:39, 26 February 2013 by Martins (Talk | contribs)

The main goal of the project LACE (Language Acquisition from Comparable tExts) is to build language modules out of data automatically extracted from comparable corpora. The results are expected to be incorporated into the architecture of UNL-based systems as supplementary resources for natural language disambiguation, both in analysis and in generation, and will be used to improve the performance of applications in machine translation, summarization, information retrieval and semantic reasoning.


Motivation

UNL-based systems have been built upon lexical resources compiled on a rather manual basis, mainly because current word sense disambiguation technology has not yet reached the maturity level that would dispense with human treatment. The increasing availability of natural language data in digital format encourages, however, the exploration of strategies for extracting supplementary lexical information from comparable corpora, which could extend the coverage of the current resources and, in the end, may provide a less expensive alternative for populating lexical databases in the UNL framework.

The project LACE aims at compiling, replicating and extending techniques that have been widely used in statistical natural language processing, and at evaluating their results in UNL-based applications. As a long-term enterprise, the Project has been divided into three subsidiary projects, devoted to three different types of corpora and involving, therefore, three different extraction strategies:

  • LACEpc - To extract data from parallel corpora (proceedings from the United Nations and from the European Parliament);
  • LACEhpc - To extract data from comparable semi-parallel corpora (Wikipedia) using high-performance computing; and
  • LACEnpc - To extract data from comparable non-parallel corpora (newspapers) using linguistically-motivated models of language automatic acquisition.

LACEhpc

The project LACEhpc aims at designing and implementing efficient high-performance computing methods for extracting monolingual and multilingual resources from comparable semi-parallel corpora.

Methodology

The project LACEhpc is divided into four main tasks:

  • extracting n-grams from monolingual corpora;
  • aligning n-grams in bilingual corpora;
  • building monolingual and multilingual language models;
  • minimizing and indexing the resulting databases for use in the UNL framework.
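The first task above can be illustrated with a minimal sketch: counting all continuous n-grams in a tokenized monolingual corpus. The function name, the toy corpus and the cutoff are illustrative assumptions, not part of the project's deliverables.

```python
from collections import Counter

def extract_ngrams(sentences, n_max=4):
    """Count all continuous n-grams up to length n_max in a tokenized corpus."""
    counts = Counter()
    for tokens in sentences:
        for n in range(1, n_max + 1):
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
    return counts

corpus = [
    "the united nations general assembly".split(),
    "the general assembly of the united nations".split(),
]
counts = extract_ngrams(corpus, n_max=3)
print(counts[("united", "nations")])   # 2
print(counts[("general", "assembly")]) # 2
```

Real extraction at Wikipedia scale would shard this loop across nodes and spill counts to disk, which is where the HPC orientation of the project comes in.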

The proposal includes the adaptation and implementation of existing algorithms; the evaluation, revision and optimization of extraction and alignment methods; and studies on the sustainability of the resulting techniques, especially their scalability and portability. In addition to HPC-oriented algorithms, the project is expected to deliver several monolingual and bilingual databases, as well as aligned corpora and translation memories, which are important assets for natural language processing and fundamental resources for research in Linguistics and Computational Linguistics.
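The n-gram alignment task admits many implementations; as one hedged sketch, a document-level co-occurrence score such as the Dice coefficient can rank candidate translation pairs in a document-aligned corpus. The function and the toy English-French data below are assumptions for illustration, not the project's actual alignment method.

```python
from collections import Counter

def dice_alignments(doc_pairs, min_score=0.8):
    """Rank candidate n-gram translation pairs by the Dice coefficient
    of their document-level co-occurrence in aligned bilingual data.
    (A toy stand-in for the project's alignment methods.)"""
    src_freq, tgt_freq, joint = Counter(), Counter(), Counter()
    for src_ngrams, tgt_ngrams in doc_pairs:
        src, tgt = set(src_ngrams), set(tgt_ngrams)
        src_freq.update(src)
        tgt_freq.update(tgt)
        joint.update((s, t) for s in src for t in tgt)
    scores = {
        (s, t): 2 * j / (src_freq[s] + tgt_freq[t])
        for (s, t), j in joint.items()
    }
    return {pair: sc for pair, sc in scores.items() if sc >= min_score}

# Three document pairs, already reduced to candidate n-grams
pairs = [
    (["united nations"], ["nations unies"]),
    (["united nations", "general assembly"], ["nations unies", "assemblée générale"]),
    (["general assembly"], ["assemblée générale"]),
]
print(dice_alignments(pairs)[("united nations", "nations unies")])  # 1.0
```

The quadratic pairing inside each document pair is what makes this a candidate for HPC treatment on corpora the size of Wikipedia.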

Participants

The project LACEhpc has been developed by the UNDL Foundation in collaboration with the Centre for Advanced Modelling Science (CADMOS), which includes researchers from the University of Geneva (UNIGE) and from the École Polytechnique Fédérale de Lausanne (EPFL).

  • Project Managers
    • Bastien CHOPARD (CADMOS)
    • Gilles FALQUET (UNIGE)
    • Ronaldo MARTINS (UNDL Foundation)
    • Martin RAJMAN (EPFL)
  • Participants
    • Kamal CHICK ECHIOUK (UNDL Foundation)
    • Meghdad FAHRAMAND (PhD student at UNIGE)
    • Jean-Luc FALCONE (UNIGE)
    • Jacques GUYOT (Simple Shift)

Files

Original project
Corpus (Wikipedia)
The corpus is presented in two distributions:
  • The experimental corpus contains 10K documents in 3 languages (English, French and Japanese), aligned at the document level;
  • The abridged corpus contains 100K documents in 10 languages (Chinese, English, French, German, Italian, Japanese, Polish, Portuguese, Russian and Spanish), aligned at the document level.
N-grams
The n-grams are presented in two different sets: continuous n-grams and discontinuous n-grams. Each set is further organized in four different subsets:
  • 0. raw data (n-grams extracted from the corpus)
  • 1. frequency filtered (n-grams whose frequency is equal to or higher than the tokens/types ratio computed over all n-grams in the corpus)
  • 2. redundancy filtered (frequency-filtered n-grams that are not subsumed by any other existing n-gram)
  • 3. constituency scores (the results of applying constituency scores to the redundancy-filtered n-grams)
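Subsets 1 and 2 above can be sketched as follows. The threshold and the subsumption test encode one reading of the descriptions (frequency at least the tokens/types ratio; an n-gram is subsumed when a longer n-gram containing it is at least as frequent); the filters actually used by the project may differ.

```python
def frequency_filter(counts):
    """Keep n-grams whose frequency reaches the tokens/types ratio,
    i.e. total occurrences divided by the number of distinct n-grams."""
    threshold = sum(counts.values()) / len(counts)
    return {g: f for g, f in counts.items() if f >= threshold}

def redundancy_filter(counts):
    """Drop an n-gram when a longer n-gram that contains it is at least
    as frequent (one reading of 'subsumed'; an assumption here)."""
    def contains(long_, short):
        n = len(short)
        return any(long_[i:i + n] == short for i in range(len(long_) - n + 1))
    return {
        g: f for g, f in counts.items()
        if not any(len(h) > len(g) and fh >= f and contains(h, g)
                   for h, fh in counts.items())
    }

raw = {("new",): 5, ("york",): 5, ("new", "york"): 5, ("said",): 9, ("of",): 1}
freq = frequency_filter(raw)    # tokens/types = 25/5 = 5, so ("of",) is dropped
kept = redundancy_filter(freq)  # ("new",) and ("york",) subsumed by ("new", "york")
print(sorted(kept))             # [('new', 'york'), ('said',)]
```

Constituency scores (subset 3) would then be computed over `kept`; the project does not specify the measure here, so none is assumed.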
Software