Grammar

In the UNL framework, a grammar is a set of rules used to generate UNL out of natural language and natural language out of UNL. Along with the UNL-NL dictionaries, grammars constitute the basic resources for UNLization and NLization.

Networks, Trees and Lists

Natural language sentences and UNL graphs are supposed to convey the same amount of information in different structures: whereas the former arranges data as an ordered list of words, the latter organizes it as a network. In that sense, going from natural language into UNL and from UNL into natural language is ultimately a matter of transforming lists into networks and vice-versa.

The UNL framework assumes that such a transformation can be carried out progressively, i.e., through a transitional data structure: the tree, which serves as an interface between lists and networks. Accordingly, there are seven different types of rules (LL, TT, NN, LT, TL, TN, NT), as indicated below:

  • ANALYSIS (NL-UNL)
    • LL - List Processing (list-to-list)
    • LT - Surface-Structure Formation (list-to-tree)
    • TT - Syntactic Processing (tree-to-tree)
    • TN - Deep-Structure Formation (tree-to-network)
    • NN - Semantic Processing (network-to-network)
  • GENERATION (UNL-NL)
    • NN - Semantic Processing (network-to-network)
    • NT - Deep-Structure Formation (network-to-tree)
    • TT - Syntactic Processing (tree-to-tree)
    • TL - Surface-Structure Formation (tree-to-list)
    • LL - List Processing (list-to-list)

The original NL sentence is first preprocessed by the LL rules in order to become an ordered list. Next, the resulting list structure is parsed with the LT rules so as to unveil its surface syntactic structure, which is already a tree. This tree structure is further processed by the TT rules in order to expose its inner organization, the deep syntactic structure, which is more suitable for semantic interpretation. Then, this deep syntactic structure is projected into a semantic network by the TN rules. The resulting semantic network is finally post-edited by the NN rules in order to comply with UNL standards and generate the UNL graph.

The reverse process is carried out during natural language generation. The UNL graph is preprocessed by the NN rules in order to become a more easily tractable semantic network. The resulting network structure is converted by the NT rules into a deep syntactic structure, which is still distant from the surface structure, as it is directly derived from the semantic arrangement. This deep syntactic structure is subsequently transformed into a surface syntactic structure by the TT rules. The surface syntactic structure is then turned into an NL-like list structure by the TL rules. This list structure is finally realized as a natural language sentence by the LL rules.
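
The following Python sketch illustrates how the two pipelines above can be read as a sequence of rule stages applied over the three data structures (list, tree, network). The stage functions are placeholder stubs introduced here for illustration only; they are not part of the UNL specification, and a real grammar would implement them as sets of LL, LT, TT, TN and NN rules.

from typing import Any, Callable, List

Stage = Callable[[Any], Any]

def ll_rules(lst: Any) -> Any:   # list-to-list: pre/post-processing of the word list
    return lst

def lt_rules(lst: Any) -> Any:   # list-to-tree: surface-structure formation
    return {"tree": lst}

def tt_rules(tree: Any) -> Any:  # tree-to-tree: syntactic processing
    return tree

def tn_rules(tree: Any) -> Any:  # tree-to-network: deep-structure formation
    return {"network": tree}

def nn_rules(net: Any) -> Any:   # network-to-network: semantic processing
    return net

def run_pipeline(data: Any, stages: List[Stage]) -> Any:
    """Apply each group of rules in order, feeding each output to the next stage."""
    for stage in stages:
        data = stage(data)
    return data

# Analysis (NL-UNL): list -> list -> tree -> tree -> network -> network
unl_graph = run_pipeline("Peter killed Mary".split(),
                         [ll_rules, lt_rules, tt_rules, tn_rules, nn_rules])

# Generation (UNL-NL) runs the mirror-image sequence:
# [nn_rules, nt_rules, tt_rules, tl_rules, ll_rules]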

As sentences are complex structures that may contain nested or embedded phrases, both the analysis and the generation processes may be interleaved rather than pipelined. This means that the flow described above is only "normal" and not "necessary". During natural language generation, an LL rule may apply prior to a TT rule, or an NN rule may be applied after a TL rule. Rules are recursive and must be applied in the order defined in the grammar as long as their conditions are true, regardless of the processing stage.

Types of rules

Main article: Grammar Specs

In the UNL framework there are two basic types of rules:

  • Transformation rules, or T-rules, are used to manipulate data structures, i.e., to transform lists into trees, trees into lists, trees into networks, networks into trees, etc. They follow the very general formalism
α:=β;

where the left side α is a condition statement, and the right side β is an action to be performed over α (see the first sketch after this list).

  • Disambiguation rules, or D-rules, are used to improve the performance of transformation rules by constraining or forcing their applicability. Disambiguation rules follow the formalism:
α=P;

where the left side α is a statement and the right side P is an integer from 0 to 255 that indicates the probability of occurrence of α.
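
As a rough illustration of the α:=β pattern, the following Python sketch represents a T-rule as a condition paired with an action and applies a set of rules in declaration order for as long as their conditions hold. The TRule class, the apply_rules driver and the example rule are assumptions made for this sketch; the concrete UNL rule notation is defined in the Grammar Specs.

from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class TRule:
    condition: Callable[[Any], bool]   # α: a test over the current data structure
    action: Callable[[Any], Any]       # β: a transformation applied when α holds

def apply_rules(data: Any, rules: List[TRule], max_steps: int = 1000) -> Any:
    """Apply rules in the order they are declared, as long as their conditions are true."""
    for _ in range(max_steps):
        for rule in rules:
            if rule.condition(data):
                data = rule.action(data)
                break
        else:
            return data   # no rule applies any more
    return data

# Hypothetical example: a list-to-list rule that lowercases every word in the list.
lowercase = TRule(
    condition=lambda words: any(w != w.lower() for w in words),
    action=lambda words: [w.lower() for w in words],
)
print(apply_rules(["Peter", "killed", "Mary"], [lowercase]))   # ['peter', 'killed', 'mary']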
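
Along the same lines, a D-rule score might be used to rank or filter the candidate configurations that T-rules produce. The sketch below assumes one possible reading of the 0-255 scale (0 rules a configuration out, 255 means it always occurs) and an illustrative combination policy; how scores actually constrain rule application is defined by the grammar specification, not here.

from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class DRule:
    statement: Callable[[Any], bool]   # α: does this rule describe the candidate?
    score: int                         # P: 0 (never occurs) .. 255 (always occurs)

def likelihood(candidate: Any, drules: List[DRule]) -> float:
    """Combine the scores of all matching D-rules into a 0..1 likelihood (illustrative policy)."""
    scores = [r.score for r in drules if r.statement(candidate)]
    if not scores:
        return 1.0    # no D-rule constrains this candidate
    if 0 in scores:
        return 0.0    # at least one D-rule forbids it
    return min(scores) / 255.0

# Hypothetical example: disfavour analyses where a determiner is followed by another determiner.
double_det = DRule(statement=lambda seq: ("DET", "DET") in zip(seq, seq[1:]), score=10)
print(likelihood(["DET", "DET", "NOU"], [double_det]))   # ~0.04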

Direction

In the UNL framework, we distinguish between analysis and generation grammars:

  • The UNL-NL (Generation) Grammar is used to generate natural language out of UNL
  • The NL-UNL (Analysis) Grammar is used to generate UNL out of natural language

Units

The process of UNLization may have different representation units, as follows:

  • Word-driven UNLization (the source document is represented as a single network of individual concepts)
  • Sentence-driven UNLization (the source document is represented as a list of non-semantically related networks of individual concepts)
  • Text-driven UNLization (the source document is represented as a network of semantically related networks of individual concepts)

In word-driven UNLization, the sentence boundaries and the structure of the source document are ignored, and the source document is represented as a single graph, i.e., as a simple network of individual concepts. In sentence-driven UNLization, the source document is analyzed, sentence by sentence, as a list of non-semantically related hyper-graphs. Each sentence is represented separately, and the only relation standing between sentences is their order in the source document. Finally, text-driven UNLization targets the rhetorical structure of the source document, i.e., it analyzes the source document as a network of semantically related hyper-graphs. Word-driven UNLization is used mainly for information retrieval and extraction, whereas sentence- and text-driven UNLization are normally used for translation.

Paradigms

The process of UNLization may follow several different paradigms, as follows:

  • Language-based UNLization (based mainly on an NL-UNL dictionary and NL-UNL grammar)
  • Knowledge-based UNLization (based mainly on the UNL Knowledge Base)
  • Example-based UNLization (based mainly on the UNL Example Base)
  • Memory-based UNLization (based mainly on the UNLization Memory)
  • Statistical-based UNLization (based mainly on statistical predictions derived from UNL-NL corpora)
  • Dialogue-based UNLization (based mainly on interaction with the user)

The actual UNLization is normally hybrid and may combine several of the strategies above.

Recall

The process of UNLization may target the whole source document or only parts of it (e.g. main clauses):

  • Full UNLization (the whole source document is UNLized)
  • Partial (or chunk) UNLization (only a part of the source document is UNLized)

Peter killed Mary with a knife yesterday morning.
  Full UNLization: Peter killed Mary with a knife yesterday morning.
  Partial UNLization: Peter killed Mary.

Precision

The process of UNLization may target the deep semantic structure of the source document (i.e., the resulting semantic structure replicates the syntactic structure of the original) or only its surface structure (i.e., the resulting semantic structure does not preserve the syntactic structure of the original):

  • Deep UNLization (the UNLization focuses on the deep semantic structure of the source document)
  • Shallow UNLization (the UNLization focuses on the surface semantic structure of the source document)

Syntactic structures are preserved in the UNL document by the use of syntactic attributes (such as @passive and @topic) or by hyper-nodes (i.e., scopes). For some purposes, such as translation, UNLization may require syntactic details; for others, such as information retrieval, syntactic structures at this level are not normally necessary:

Mary was killed by Peter.
  Shallow UNLization: Peter killed Mary
  Deep UNLization: [Peter killed Mary].@passive
Mary saw Peter going to Paris.
  Shallow UNLization: Mary saw Peter & Peter was going to Paris
  Deep UNLization: Mary saw [Peter going to Paris].
As for the little girl, the dog licked her.
  Shallow UNLization: the dog licked the little girl
  Deep UNLization: the dog licked [the little girl].@topic

Level

The process of UNLization may target literal meanings (locutionary content) or non-literal meanings (illocutionary content):

  • Locutionary (the UNLization represents only the literal meaning)
  • Illocutionary (the UNLization also represents non-literal meanings, including speech acts)

The illocutionary force may be represented by figure-of-speech and speech-act attributes:

It is as soft as concrete.
  Locutionary level: it is as soft as concrete
  Illocutionary level: [it is as soft as concrete].@irony
Can you pass me the salt?
  Locutionary level: can you pass me the salt?
  Illocutionary level: [you pass me the salt].@request

Methods

Humans and machines may play different roles in UNLization methods:

  • Fully automatic UNLization (the whole process is carried out by the machine, without any intervention of the human user)
  • Human-aided machine UNLization (the process is carried out mainly by the machine, with some intervention of the human user, either as a pre-editor or as a post-editor, or during the UNLization itself, as in dialogue-based UNLization)
  • Machine-aided human UNLization (the process is carried out mainly by the human user, with some help from the machine, as in dictionary or memory lookup)
  • Fully human UNLization (the whole process is carried out by the human user, without any intervention of the machine)
Software