Grammar

From UNL Wiki
Revision as of 21:10, 21 September 2012

In the UNL framework, a grammar is a set of rules used to generate UNL out of natural language, and natural language out of UNL. Together with the UNL-NL dictionaries, grammars constitute the basic resources for UNLization and NLization.

Networks, Trees and Lists

Natural language sentences and UNL graphs are supposed to convey the same amount of information in different structures: whereas the former arranges data as an ordered list of words, the latter organizes it as a network. In that sense, going from natural language into UNL, and from UNL into natural language, is ultimately a matter of transforming lists into networks and vice versa.

The UNL framework assumes that such a transformation can be carried out progressively, i.e., through a transitional data structure: the tree, which is used as an interface between lists and networks. Accordingly, there are seven different types of rules (LL, TT, NN, LT, TL, TN, NT), as indicated below:

  • ANALYSIS (NL-UNL)
    • LL - List Processing (list-to-list)
    • LT - Surface-Structure Formation (list-to-tree)
    • TT - Syntactic Processing (tree-to-tree)
    • TN - Deep-Structure Formation (tree-to-network)
    • NN - Semantic Processing (network-to-network)
  • GENERATION (UNL-NL)
    • NN - Semantic Processing (network-to-network)
    • NT - Deep-Structure Formation (network-to-tree)
    • TT - Syntactic Processing (tree-to-tree)
    • TL - Surface-Structure Formation (tree-to-list)
    • LL - List Processing (list-to-list)
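The three data structures can be sketched as follows. The Python encodings below are purely illustrative assumptions (the UNL framework does not prescribe any concrete encoding), using the sentence "Peter killed Mary" and the UNL relations agt (agent) and obj (object):

```python
# Illustrative encodings of the three data structures; the encodings
# themselves are assumptions, not part of the UNL specifications.

# List: an ordered sequence of words -- what LL rules operate on
sentence_list = ["Peter", "killed", "Mary"]

# Tree: a nested (label, children) structure -- what TT rules operate on
syntax_tree = ("S", [("NP", ["Peter"]),
                     ("VP", [("V", ["killed"]),
                             ("NP", ["Mary"])])])

# Network: a set of labelled binary relations between concept nodes --
# what NN rules operate on
semantic_network = {("agt", "kill", "Peter"),
                    ("obj", "kill", "Mary")}
```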

The original NL sentence is first preprocessed by the LL rules in order to become an ordered list. Next, the resulting list structure is parsed with the LT rules, so as to unveil its surface syntactic structure, which is already a tree. This tree structure is further processed by the TT rules in order to expose its inner organization, the deep syntactic structure, which is supposed to be more suitable for semantic interpretation. Then, this deep syntactic structure is projected into a semantic network by the TN rules. The resulting semantic network is finally post-edited by the NN rules in order to comply with UNL standards and generate the UNL graph.
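The analysis pipeline can be sketched as a composition of five stage functions, one per rule type. Every function body below is a trivial stand-in for what would really be a whole set of rules of that type; only the stage order and the list-to-tree-to-network progression are taken from the text:

```python
# Stand-ins for the five NL-UNL analysis stages.
def ll(words):   # list-to-list: e.g. normalize case
    return [w.lower() for w in words]

def lt(words):   # list-to-tree: wrap the list in a trivial one-level tree
    return ("S", words)

def tt(tree):    # tree-to-tree: surface-to-deep structure (identity here)
    return tree

def tn(tree):    # tree-to-network: project tree leaves into relations
    return {("rel", word) for word in tree[1]}

def nn(net):     # network-to-network: post-editing (identity here)
    return net

def analyze(words):
    """Run the NL-UNL stages in their 'normal' pipelined order."""
    return nn(tn(tt(lt(ll(words)))))
```

For instance, `analyze(["Peter", "killed", "Mary"])` yields the toy network `{("rel", "peter"), ("rel", "killed"), ("rel", "mary")}`; generation would run the mirror-image stages (NN, NT, TT, TL, LL) in the opposite direction.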

The reverse process is carried out during natural language generation. The UNL graph is preprocessed by the NN rules in order to become a more easily tractable semantic network. The resulting network structure is converted by the NT rules into a syntactic structure, which is still distant from the surface structure, as it is directly derived from the semantic arrangement. This deep syntactic structure is subsequently transformed into a surface syntactic structure by the TT rules. The surface syntactic structure then undergoes many other changes according to the TL rules, which generate an NL-like list structure. This list structure is finally realized as a natural language sentence by the LL rules.

As sentences are complex structures that may contain nested or embedded phrases, both the analysis and the generation processes may be interleaved rather than pipelined. This means that the natural flow described above is only "normal", not "necessary". During natural language generation, an LL rule may apply prior to a TT rule, or an NN rule may be applied after a TL rule. Rules are recursive and must be applied in the order defined in the grammar for as long as their conditions hold, regardless of the current state of the process.
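The control strategy described above can be sketched as a loop over (condition, action) pairs. This is an assumption distilled from the text, not the actual UNL engine:

```python
def run_grammar(rules, state, max_steps=1000):
    """Apply (condition, action) rules in the order defined in the grammar:
    repeatedly scan the rule list, fire the first rule whose condition
    holds, and restart the scan, until no rule applies (max_steps guards
    against non-terminating grammars)."""
    for _ in range(max_steps):
        for condition, action in rules:
            if condition(state):
                state = action(state)
                break          # a rule fired: rescan from the top
        else:
            break              # no rule applicable: fixed point reached
    return state

# Toy rules over a string state: the second rule fires only once the
# first one no longer applies, regardless of its position in the grammar.
rules = [
    (lambda s: "A" in s, lambda s: s.replace("A", "a")),
    (lambda s: "B" in s, lambda s: s.replace("B", "b")),
]
```

For example, `run_grammar(rules, "BANANA")` returns `"baNaNa"`: the first rule rewrites all the A's, and only then does the second rule get its turn.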

Types of rules

Main article: Grammar Specs

In the UNL framework there are two basic types of rules:

  • Transformation rules, or T-rules, are used to manipulate data structures, i.e., to transform lists into trees, trees into lists, trees into networks, networks into trees, etc. They follow the very general formalism
α:=β;

where the left side α is a condition statement, and the right side β is an action to be performed over α.

  • Disambiguation rules, or D-rules, are used to improve the performance of transformation rules by constraining or forcing their applicability. Disambiguation rules follow the formalism:
α=P;

where the left side α is a statement and the right side P is an integer from 0 to 255 that indicates the probability of occurrence of α.
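The two formalisms can be told apart mechanically. The parser below is a sketch based only on the two templates given above ("α:=β;" and "α=P;"), with hypothetical rule strings; the real rule syntax defined in the Grammar Specs is far richer:

```python
def parse_rule(text):
    """Classify a rule string as a T-rule ("alpha:=beta;") or a D-rule
    ("alpha=P;", with P an integer from 0 to 255). Sketch only: the
    internal syntax of the statements is not analyzed."""
    body = text.strip().rstrip(";")
    if ":=" in body:
        alpha, beta = body.split(":=", 1)
        return {"type": "T", "condition": alpha, "action": beta}
    alpha, p = body.rsplit("=", 1)
    probability = int(p)
    if not 0 <= probability <= 255:
        raise ValueError("D-rule probability must lie in 0..255")
    return {"type": "D", "statement": alpha, "probability": probability}
```

For instance, `parse_rule("VB:=VP;")` is recognized as a T-rule and `parse_rule("N,V=0;")` as a D-rule with probability 0 (both rule strings are made up for illustration).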

Direction

In the UNL framework, grammars are not bidirectional, although they share the same syntax:

  • UNL-NL T-Grammar: used for natural language generation
  • UNL-NL D-Grammar: used for improving the results of the UNL-NL T-Grammar
  • NL-UNL T-Grammar: used for natural language analysis
  • NL-UNL D-Grammar: used for tokenization and for improving the results of the NL-UNL T-Grammar

Units

In the UNL framework, grammars may have different processing units:

  • Text-driven grammars operate over texts and process the source document as a single unit
  • Sentence-driven grammars operate over sentences and process each sentence or graph separately
  • Word-driven grammars operate over words and process each word or node separately

Text-driven grammars are normally used in summarization and simplification, when the rhetorical structure of the source document is important. Sentence-driven grammars are used mostly in translation, when the source document can be treated as a list of non-semantically related units, to be processed one at a time. Word-driven grammars are used in information retrieval and opinion mining, when each word or node can be treated in isolation.
All these grammars share the same type of rule.
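The three processing units can be illustrated by how the same source document would be segmented before a grammar is applied. The segmentation code is a naive sketch (real tokenization is handled by the NL-UNL D-Grammar):

```python
document = "Peter killed Mary. The dog licked the girl."

# Text-driven: the whole document is a single processing unit
text_units = [document]

# Sentence-driven: one unit per sentence (naive splitting on ".")
sentence_units = [s.strip() + "." for s in document.split(".") if s.strip()]

# Word-driven: one unit per word
word_units = document.replace(".", "").split()
```

This yields one text unit, two sentence units, and eight word units for the same document.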

Recall

Grammars may target the whole source document or only parts of it (e.g. main clauses):

  • Chunk grammars target only a part of the source document
  • Full grammars target the whole source document

Precision

Grammars may target the deep or the surface structure of the source document:

  • Deep grammars focus on the deep dependency relations of the source document
  • Shallow grammars focus only on the surface dependency relations of the source document
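Using the sentence "Mary was killed by Peter" and an illustrative tuple encoding of UNL relations, the difference can be sketched as follows; attaching the @passive attribute directly to the predicate is a simplification of how such attributes are actually assigned:

```python
# Shallow: only the surface dependency relations are produced
shallow = {("agt", "kill", "Peter"),
           ("obj", "kill", "Mary")}

# Deep: the same relations, plus a syntactic attribute (@passive)
# preserving the original passive construction
deep = shallow | {("attr", "kill", "@passive")}
```

A deep grammar thus produces a strict superset of the shallow output: the extra material records the syntactic packaging (passive voice, topicalization, etc.) that a shallow grammar discards.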
Software