NL Reference Corpus
From UNL Wiki
Revision as of 17:33, 17 April 2014
The NL Reference Corpus (NC) is the corpus used to prepare and to assess grammars for sentence-based UNLization. It is divided into 6 levels according to the Framework of Reference for UNL (FoR-UNL):
- NC-A1: NL Reference Corpus A1
- NC-A2: NL Reference Corpus A2
- NC-B1: NL Reference Corpus B1
- NC-B2: NL Reference Corpus B2
- NC-C1: NL Reference Corpus C1
- NC-C2: NL Reference Corpus C2
Methodology
As a natural language corpus, the NC varies for each language. It is derived from a base corpus to be compiled and processed according to the following criteria:
- The Base Corpus must have at least 5,000,000 tokens (strings isolated by blank space and other word boundary markers). It must be representative of the contemporary standard use of the written language, and should include documents from as many different genres and domains as possible.
- The Base Corpus must be segmented (in sentences).
- The Segmented Corpus must be tokenized (according to the natural language dictionary exported from the UNLarium).
- The Tokenized Corpus must be annotated for lexical category, in order to generate the linear sentence structures (LSS).
- The Annotated Corpus must be subdivided into 6 different subsets, according to the number of tokens:
- A1 = length <= 15th percentile (very small sentences)
- A2 = 15th percentile < length <= 30th percentile (small sentences)
- B1 = 30th percentile < length <= 45th percentile (small medium-size sentences)
- B2 = 45th percentile < length <= 60th percentile (long medium-size sentences)
- C1 = 60th percentile < length <= 80th percentile (long sentences)
- C2 = length > 80th percentile (very long sentences)
- Each subcorpus is used to compile the NC corpus, which comprises a training corpus and a testing corpus; both refer to the 1,000 most frequent LSS.
- The training corpus consists of 1 exemplar of each LSS and is used to prepare the grammar (1,000 sentences in total).
- The testing corpus consists of 4 exemplars of each LSS, randomly selected from the Annotated Corpus (4,000 sentences in total).
- The whole NC corpus (i.e., 5 exemplars of each LSS) is used to calculate the F-measure, which combines the precision and the recall of the grammars.
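The percentile-based subdivision described above can be sketched as follows. This is an illustrative helper, not part of the UNL toolchain: sentence length is taken as the number of tokens, and the 15/30/45/60/80 thresholds follow the list above.

```python
# Sketch of the percentile-based subdivision of the Annotated Corpus
# into the 6 FoR-UNL bands (A1..C2). Function names are illustrative.

def percentile(sorted_lengths, p):
    """Nearest-rank percentile of a pre-sorted list of sentence lengths."""
    k = max(0, min(len(sorted_lengths) - 1,
                   round(p / 100 * len(sorted_lengths)) - 1))
    return sorted_lengths[k]

def subdivide(sentences):
    """Assign each tokenized sentence (a list of tokens) to one band."""
    lengths = sorted(len(s) for s in sentences)
    p15, p30, p45, p60, p80 = (percentile(lengths, p)
                               for p in (15, 30, 45, 60, 80))
    bands = {"A1": [], "A2": [], "B1": [], "B2": [], "C1": [], "C2": []}
    for s in sentences:
        n = len(s)
        if n <= p15:
            bands["A1"].append(s)      # very small sentences
        elif n <= p30:
            bands["A2"].append(s)      # small sentences
        elif n <= p45:
            bands["B1"].append(s)      # small medium-size sentences
        elif n <= p60:
            bands["B2"].append(s)      # long medium-size sentences
        elif n <= p80:
            bands["C1"].append(s)      # long sentences
        else:
            bands["C2"].append(s)      # very long sentences
    return bands
```

Note that the band boundaries are computed from the corpus itself, which is why the concrete length thresholds differ per language (compare the Arabic thresholds in the Files section).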
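The compilation of the training and testing corpora (1 + 4 exemplars of each of the 1,000 most frequent LSS) could be sketched as below. The function and its signature are hypothetical; the input is assumed to be a list of (LSS, sentence) pairs from the Annotated Corpus.

```python
import random
from collections import Counter

def compile_nc(annotated, n_lss=1000, n_test=4, seed=0):
    """Hypothetical sketch: split the Annotated Corpus into training
    (1 exemplar per frequent LSS) and testing (n_test exemplars per LSS).

    annotated -- list of (lss, sentence) pairs
    """
    rng = random.Random(seed)
    # Keep only the n_lss most frequent linear sentence structures.
    freq = Counter(lss for lss, _ in annotated)
    top = {lss for lss, _ in freq.most_common(n_lss)}
    by_lss = {}
    for lss, sent in annotated:
        if lss in top:
            by_lss.setdefault(lss, []).append(sent)
    training, testing = [], []
    for lss, sents in by_lss.items():
        # Randomly draw up to 1 + n_test exemplars of this LSS.
        picked = rng.sample(sents, min(len(sents), n_test + 1))
        training.append((lss, picked[0]))
        testing.extend((lss, s) for s in picked[1:])
    return training, testing
```

With the article's parameters (n_lss=1000, n_test=4) this yields the 1,000 training and 4,000 testing sentences described above, provided each frequent LSS has at least 5 exemplars in the Annotated Corpus.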
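For reference, the F-measure combines precision and recall as their (weighted) harmonic mean; a minimal implementation:

```python
def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall (F1 when beta=1)."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

The default beta=1 weights precision and recall equally, which is the usual choice for grammar assessment of this kind.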
Files
- Arabic
- Source: Wikipedia
- Total number of distinct sentences: 801,258
- Corpus
- Corpus NC_ara_A1 (length <= 9: 141,988 sentences)
- Corpus NC_ara_A2 (9 < length <= 13: 150,406 sentences)
- Corpus NC_ara_B1 (13 < length <= 17: 146,178 sentences)
- Corpus NC_ara_B2 (17 < length <= 22: 141,376 sentences)
- Corpus NC_ara_C1 (22 < length <= 32: 165,455 sentences)
- Corpus NC_ara_C2 (length > 32: 165,616 sentences)