!!!Meeting setup
* Date: 02.04.2007
* Time: 09.30 Norw. time
* Place: Internet
* Tools: SubEthaEdit, iChat
!!!Agenda
# Opening, agenda review
# Reviewing the task list from last week
# Documentation - divvun.no
# Corpus gathering
# Corpus infrastructure
# Infrastructure
# Linguistics
# Name lexicon infrastructure
# Spellers
# Other issues
# Summary, task lists
# Closing
!!!1. Opening, agenda review, participants
Opened at 09:36.
Present: __Saara, Sjur, Steinar, Tomi, Trond__
Absent: __Børre, Maaren, Thomas__
Agenda accepted as is.
!!!2. Updated task status since last meeting
!! Børre
* update and fix our documentation and infrastructure as __Steinar__ finds
problem areas - low priority
* find missing {{nob}} parallel texts in corpus
* add prefixes to the PLX conversion
** partly done
* improve number PLX conversion
* add {{sma}} texts to the corpus repository
* improve automatic alignment process
* add potential speller test texts
** done
* collect a list of PR recipients, forward to Berit Karen Paulsen
* add version info to the generated speller lexicons
** __Sjur__ did it
* run all known spelling errors in the corpus through the speller
* test the {{typos.txt}} list, and check that all entries are properly corrected
* consider how to do a regression __self-test__
* add extraction of all known spelling errors in the regular corpus (not the
{{prooftest}} corpus), and check that they are properly corrected
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
!! Maaren
* lexicalise actio compounds
* Manually mark speller test documents for typos
!! Saara
* prepare more files for manual alignment
** discussed with Trond
* mark-up the added speller test texts, using our existing xml format
** improved markup almost ready.
* improve cgi-bin scripts
** add new features to the paradigm generator
*** not done
* add new XSL/XML headers for proofing test docs
** not done
* continue with speller test data
** done as specified so far. Next the xml format? Yes, please :)
* compilation of verb lists
** not done, will do this week.
* speed in smj conversion
** not done, is this still a problem?
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
* other
** some reformatting of prooftest-MinAigi filenames
!! Sjur
* hire linguist
** done! (some bureaucracy left)
* finish press release for the beta
** not done
* collect a list of PR recipients
** not done
* add version info to the generated speller lexicons
** done - could still need some improvement
* run all known spelling errors in the corpus through the speller
** we're almost able to do that:)
* consider how to do a regression __self-test__
** not done
* document the AppleScript testing tool
** not finished
* write tools for statistical analysis of test results
** based on work by __Saara__
* make improved {{smj}} speller (incl. derivations and compounds)
** worked on it a lot last week, but we encountered several hurdles with the
PLX conversion
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
!! Steinar
* Beta testing: Align manually (shorter texts)
* Manually mark speller test texts for typos (making them into gold standards),
add the texts to the {{orig/}} directory
** some texts finished, but they are not yet added to the {{orig/}} directory
* Complete the semantic sets in sme-dis.rle
** no work this week
* missing lists
** added missing literature terms
* Look at the actio compound issue when adding from missing lists
** not done
* Align corpus manually
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
!! Thomas
* work with compounding
** continued
* Lack of lowering before hyphen: Twol rewrite.
* translate beta release docs to {{sme}} and {{smj}}
* Add potential speller test texts
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
!! Tomi
* make improved {{smj}} speller (incl. derivations and compounds)
** not finished
* make PLX conversion test sample; add conversion testing to the make file
* improve number PLX conversion
* update {{ccat}} to handle error/correction markup
** updated
* add version info to the generated speller lexicons
** __Sjur__ did it
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
* other
** Added nouns and adjectives to xfst based PLX conversion
!! Trond
* Test the beta versions
** Done testing
* Work on the parallel corpus issues
** Intensive work on the anchor list.
** Discuss with Anders
*** Mailed Lars
** Work on the aligner (with __Børre__)
*** Done, still more to do.
** fix {{sme}} texts in corpus this month
*** Worked on it.
** find missing {{nob}} parallel texts in corpus, go through Saara's list
*** Not done.
* Postpone these tasks to after the beta:
** update the {{smj}} proper noun lexicon, and refine the morphological
analysis, cf. the propernoun-smj-lex.txt
** Go through the Num bugs
* Improve automatic alignment process
* Align corpus manually
* Add potential speller test texts
* collect a list of PR recipients
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
!!!3. Documentation
The open documentation issues fall into these three categories:
* Beta documentation for testers
* Documentation for the online corpora
* General documentation improvement after Steinar's test (for open-source
release)
TODO:
* write form to request corpus user account (__Børre, Sjur, Trond__)
** delayed till after the beta release
* document how to apply for access to closed corpus, and details on the corpus
and its use in general (__Børre, Sjur, Trond__)
** delayed till after the beta release
* correct and improve it based on feedback from __Steinar__ (__Børre__)
** low priority
* beta documentation (see separate beta section below)
!!!4. Corpus gathering
TODO:
* {{sme}} texts: no new additions, fix corpus errors during this month
(__Børre, Trond, Saara__)
* missing {{nob}} parallel texts should be added if such holes are found
(__Børre, Trond__)
* Go through the list of missing or erroneous {{nob}} texts, based on
__Saara's__ perfect list (__Børre, Trond__)
* add {{sma}} texts to the corpus repository (__Børre__)
!!!5. Corpus infrastructure
!!Alignment
Trond has done a lot of work on improving the anchor list, and it was fruitful.
The anchor list should still be improved further, though, and it might be
useful to run the alignment again.
There is a bug in the web interface leading to wrong alignment: the output is
one sentence off in some, but not all, cases. Trond has discussed this with Lars.
__TODO__
* go through other directories (nob directories, sd directories), fix the
parallelism information for other documents (2 hours)
(__Børre__)
* Improve the automatic process:
** Improve the anchor list and realign (__Trond, Børre__)
** Adding words alone does not improve the alignment; the format of the anchor
entries has to be considered as well. If a wildcard entry such as {{guo*}} is
cut too early, the wrong word can be selected.
** The documents still have some formatting issues which cause trouble in
alignment (in some documents the tables are included in the text, in others
not, etc.).
** Test and improve settings in the aligner
* Align manually (__Trond, Steinar__), especially shorter terminological texts -
a good idea. __Saara__ will look for troublesome texts and prepare them for
manual alignment.
!!!6. Infrastructure
__TODO:__
* update and fix our documentation and infrastructure as __Steinar__ finds
problem areas (__Børre__)
** started, working on it
!!!7. Linguistics
!!North Sámi
TODO:
* lexicalise actio compounds. Example: ''vuolggasadji'' vs. ''vuolginsadji''
(__Maaren__)
* fix stuorra-oslolaš lower case {{o}} (__Sjur, Thomas, Trond__)
** postponed till after the public beta
!!Lule Sámi
TODO:
* refine {{smj}} proper noun lexica, cf. the propernoun-smj-lex.txt
(__Thomas, Trond__)
!!!8. Name lexicon infrastructure
Decisions made in Tromsø can be found in [this meeting
memo|/doc/admin/physical_meetings/tromso-2006-08-propnoun.html].
__TODO:__
# fix bugs in lexc2xml; add comments to the log element (__Saara__)
# finish first version of the editing (__Sjur__)
# test editing of the xml files. If ok, then: (__Sjur, Thomas, Trond__)
# generate terms-smX.xml automatically from propernoun-sme-lex.xml (add nob as
well); the morphological section should be kept intact, e.g. in
propernoun-sme-morph.txt (__Sjur, Saara__)
# convert propernoun-($lang)-lex.txt to a derived file from common xml files
(__Sjur, Tomi, Saara__)
# implement data synchronisation between [risten.no|http://www.risten.no] and
the cvs repo, and possibly other servers (i.e. the G5 as an alternative server
to the public risten.no - it might be faster and better suited than the
official one; local installations could also be treated the same way)
# start to use the xml file as source file
# clean terms-sme.xml such that all names have the correct tag for their use
(e.g. @type=secondary) (__Thomas, Maaren, linguists__)
# merge placenames which are erroneously in different entries, e.g. Helsinki,
Helsingfors, Helsset (__linguists__)
# publish the name lexicon on risten.no (__Sjur__)
# add missing parallel names for placenames (__linguists__)
# add informative links between first names like Niillas and Nils
(__linguists__)
!!!9. Spellers
!!OOo speller(s)
TODO after the MS Office Beta is delivered:
* add Hunspell data generation to the lexc2xspell (__Tomi__ - after the
PLX data generation is finished)
* study the Hunspell formalism in detail (__Børre, Sjur, Tomi__)
!!Testing
!Selecting test texts
In principle, we need the same text types as the ones we already aim at in our
corpus. In practice, we must use new, unseen texts. We would like to have a
balanced set of texts, but right now we have:
* Min Áigi net issue
* Blogs
* Our own linguistic texts
* New department texts from the net
!Spelling Error Markup
Procedure for marking up:
# pick a file in:
{{/home/steinarn/gt/sme/zcorp/prooftest/bound/sme/news/MinAigi/}}, e.g.:
{{index2.php_id_647.html.xml}}
# rename it from {{.xml}} to {{.correct.xml}}:
{{index2.php_id_647.html.correct.xml}}
# copy to your own computer
# open in SEE or XMLEditor
# add manual markup according to the established convention
# when done, copy the file back to victorio - see dir structure below
Directory structure and file locations for manually corrected files:
{{{
1  prooftest/.../orig/file.html              loading
1  prooftest/.../orig/file.html.xsl          converting
2  prooftest/.../bound/file.html.xml         to this file, copying back to orig as 3a
3a prooftest/.../orig/file.html.correct.xml  speling§spelling - working on this
                                             manually, using RCS to check in
                                             each generation of manual markup
3b prooftest/.../bound/file.html.xml         speling
}}}
- still missing: the last changes in the {{eror§error}} conversion, and the last
{{ccat}} option
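For orientation, here is a minimal sketch (Python, purely illustrative - the
real conversion lives in {{ccat}} and the xml pipeline) of how
{{error§correction}} pairs could be pulled out of a manually marked-up text and
written as a tab-separated list of the {{typos.txt}} kind. The flat {{§}}
convention assumed here, including the parenthesised multi-string corrections,
is inferred from the examples in this section and may not match the real markup
exactly.
{{{
# Sketch only: pull error§correction pairs out of a manually marked-up text
# and print them as a tab-separated list (the typos.txt format).
# Assumed markup (illustrative): "erorr§errors" for single tokens and
# "erorr.Og§(error. Og)" when the correction spans several strings.
import re
import sys

PAIR = re.compile(r'(\S+)§(\([^)]*\)|\S+)')

def extract_pairs(text):
    for match in PAIR.finditer(text):
        error, correction = match.group(1), match.group(2)
        correction = correction.strip('()')   # unwrap multi-string corrections
        yield error, correction

if __name__ == '__main__':
    for line in sys.stdin:
        for error, correction in extract_pairs(line):
            print('%s\t%s' % (error, correction))
}}}
A script along these lines only handles the flat {{§}} markup; the actual
pipeline works on the converted xml files.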
__TODO:__
* Manually mark test texts for typos (making them into gold standards)
(__Steinar__)
** {{erorr§errors}}
** when correcting to multiple strings: {{erorr.Og§(error. Og)}}
*** update correction markup xml conversion to handle the second case
(__Saara__)
* change {{ccat}} to handle error/correction markup (__Tomi__)
** extract the document text with original errors (input to standard speller
test) - default? yes (= the old behaviour)
*** done
** extract the document text with the available corrections (the corrected
document), to be used to automatically evaluate testing tool output; could
also be used to give the corpus tagger an easier job :-)
*** done
** extract all and only the spelling errors with their corrections, as a
tab-separated file with {{erorr}} and {{correction}} columns - just like
typos.txt. Used for regression testing (making sure we don't take steps back)
*** done
** extract the whole text, both the original words and their corrections, as a
tab-separated file with {{erorr}} and {{correction}} columns - just like
typos.txt. Used for another type of speller testing. {{correction}} should be
an empty string if there is no correction.
*** not done
* Set up ways of adding meta-information (source info, used in testing or not,
added to lexicon or not)
** add the wanted xml elements to the XSL header (__Saara__) (source info is
the same as for regular docs):
*** outworned (ie not suitable for speller testing any more, only for regression
testing)
*** lexicalised (all unknown, correctly spelled words added to lexicon)
* Conduct tests on new beta versions on the basis of the unspoiled gold standard
documents (__whoever has time__), and fill in data from testing (the testers:
__who?__)
* alternatively: make test scripts that will run the tests automatically,
collect the numbers, and transform them into test results (__Sjur__)
* include the ones already tested in the {{testing/}} directory
* test each version before beta release
!Testing tools
__TODO:__
* document the AppleScript testing tool (__Sjur__)
* write tools for statistical analysis of test results (__Sjur__)
** started, __Saara__ continued, will continue with Forrest integration
** Saara found a bug in the Polderland speller tool, now reported to them
!Regression tests
__TODO:__
* add extraction of all known spelling errors in the corpus (not the
{{prooftest}} corpus), and check that they are properly corrected
(__Børre, Sjur__)
** {{ccat}} is now ready; it should be integrated into the Makefile (__Sjur, Tomi__)
* test the {{typos.txt}} list, and check that all entries are properly corrected
(__Børre, Sjur__)
* consider how to do a regression __self-test__, i.e. how to test the full
wordlist (__Børre, Sjur__) - a sketch follows after this list
** extract all the base forms in the lexicon, and run them through the speller
** extract all SUB-marked entries, and run them through the lexicon
*** integrate these in the make file (__Sjur__)
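A possible shape for the __self-test__ sketched above, assuming a command-line
speller that reads one word per line on stdin and prints the words it rejects;
the {{lookup-speller}} command name and that behaviour are placeholder
assumptions, not the actual Polderland interface.
{{{
# Sketch of a regression self-test: every base form extracted from the lexicon
# should be accepted by the speller. The "lookup-speller" command (reading
# words on stdin, printing rejected words on stdout) is a placeholder.
import subprocess
import sys

SPELLER_CMD = ['lookup-speller']      # placeholder command name

def rejected_words(wordlist_file):
    with open(wordlist_file, encoding='utf-8') as f:
        words = f.read()
    result = subprocess.run(SPELLER_CMD, input=words, capture_output=True,
                            text=True, check=True)
    return [w for w in result.stdout.split() if w]

if __name__ == '__main__':
    bad = rejected_words(sys.argv[1])  # e.g. a file of extracted base forms
    for word in bad:
        print('REGRESSION: rejected base form:', word)
    sys.exit(1 if bad else 0)
}}}
The same skeleton could be reused for the other extractions listed above, and
the non-zero exit code makes it easy to hook into the make file.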
!!Localisation
We need to translate the information added to our front page (and to a separate
page) regarding the beta release. The press release also needs to be translated.
TODO:
* translate beta release docs to {{sme}} (__Thomas__)
* translate beta release docs to {{smj}} (__Thomas__)
!!Lexicon conversion to the PLX format
Postverbal clitics
How to include compounding restriction comment tags in the transducers:
{{{
giv0ri:giv'ri ALBMI ; !+SgNomCmp +SgGenCmp +PlGenCmp
=> (using a perl script or similar)
+SgNomCmp+SgGenCmp+PlGenCmpgiv0ri:giv'ri ALBMI ; !
}}}
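Below is a minimal sketch of the "perl script or similar" step, written in
Python for illustration: it moves the compounding restriction tags from the
trailing {{!}} comment to the front of the entry, exactly as in the example,
and passes lines without such tags through untouched.
{{{
# Move compounding restriction tags from the trailing "!" comment to the
# front of the entry:
#   giv0ri:giv'ri ALBMI ; !+SgNomCmp +SgGenCmp +PlGenCmp
# becomes
#   +SgNomCmp+SgGenCmp+PlGenCmpgiv0ri:giv'ri ALBMI ; !
import re
import sys

LINE = re.compile(r'^(?P<entry>.*;\s*)!(?P<tags>(\s*\+\S+)+)\s*$')

def move_tags(line):
    m = LINE.match(line.rstrip('\n'))
    if not m:
        return line.rstrip('\n')               # no compounding tags
    tags = ''.join(m.group('tags').split())    # "+SgNomCmp+SgGenCmp+PlGenCmp"
    return tags + m.group('entry') + '!'

if __name__ == '__main__':
    for line in sys.stdin:
        print(move_tags(line))
}}}
Running it as a filter keeps the lexc source files untouched.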
__TODO:__
# improve prefix conversion to PLX (__Tomi__)
# improve middle noun conversion to PLX (__Tomi__)
# improve noun + adjective PLX conversion: (__Tomi__)
## compounding stems - how do we generate them? Using the java client?
{{+SgNomCmp+Cmpnd}} = {{sáme–}} should give the correct compounding stem,
shouldn't it? We want to __optionally__ go from {{sáme- NLI}} to
{{sáme NL}}: {{- NLI (->) NL}} (see the sketch after this list)
## compounding tags - we need to obey them when making the transducers.
Suggestion - see above.
# add propernouns to xfst-based conversion
# make conversion test sample; add conversion testing to the make file
(__Tomi__)
# improve number conversion (__Børre, Tomi__)
# run xfst-based PLX conversion on victorio, make the result available on our
public server (__Saara, Sjur__)
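As referenced from item 3 above, here is a minimal sketch of the __optional__
{{- NLI (->) NL}} rewrite, again in Python for illustration only; in practice
this would presumably live in the xfst-based conversion itself, and the
continuation class names are taken straight from the example.
{{{
# Optionally rewrite "sáme- NLI" entries into the compounding stem "sáme NL",
# keeping the original entry as well (the rewrite is optional, "(->)" style).
import re
import sys

HYPHEN_NLI = re.compile(r'^(?P<stem>.*)-(?P<space>\s+)NLI(?P<rest>\s*;.*)$')

def variants(line):
    line = line.rstrip('\n')
    yield line                                  # always keep the original
    m = HYPHEN_NLI.match(line)
    if m:                                       # add the compounding stem form
        yield m.group('stem') + m.group('space') + 'NL' + m.group('rest')

if __name__ == '__main__':
    for line in sys.stdin:
        for v in variants(line):
            print(v)
}}}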
!!Public Beta release
Due to the problems with generating the PLX files discussed above, we need to
postpone the release date to mid-April, before the "physical" meeting.
__DONE:__
* delivered PLX data of {{sme}} and {{smj}} including compounding
* translated Windows installer to {{sme}} and {{smj}}
* installed PLX compiler in G5 at {{/usr/local/bin/mklex*}} (one version for
{{sme}} and one for {{smj}})
* added resources needed for compiling PLX lexicons to our cvs repo
* tested the beta drop from Polderland - good that we did, as it is absolutely
unacceptable (our responsibility - only linguistic errors (poor coverage)
found so far)
* made compilation of the MS Office spellers part of the Makefile
* installed Windows and MS Office; tested the tools on Windows
__TODO:__
* improved {{smj}} speller (incl. derivations and compounds) (__Sjur, Tomi__)
* finish press release (__Sjur__)
* add info to front page (incl. download links) (__Børre__)
* write separate page with detailed info (incl. download links) (__Børre__)
* translate press release, web pages (__Børre, Thomas, whoever__)
* collect a list of PR recipients, forward to Berit Karen Paulsen
(__Børre, Sjur, Trond__)
!!Version identification of speller lexicons
__TODO:__
* add version info to the generated speller lexicons
** done - still needs to be improved by automatically adding the date of
compilation (__Sjur__)
!!!10. Other
!!Project meeting IRL
The planned gathering will have to be on 16.-20.4., in __Kárásjohki__ or
__Guovdageaidnu__. All of Divvun should participate, and some from the UiTø
project as well.
!!Corpus contracts
TODO:
* publish corpus contracts and project infra as open-source on NoDaLi-sta
(__Sjur__)
** __delayed__ until the public beta is out the door
!!Bug fixing
__51__ open Divvun/Disamb bugs, and __23__ risten.no bugs
!!Easter
* Børre: away
* Maaren: away
* Saara: working
* Sjur: working
* Thomas: away 2/4 - 4/4
* Tomi: At work, in Karigasniemi
* Trond: working
!!!11. Next meeting, closing
The next meeting is 2.4.2007, 09:30 Norwegian time.
The meeting was closed at 12:40.
!!!Appendix - task lists for the next week
!! Børre
[iCal|/doc/admin/weekly/2007/Tasks_2007-04-02_Boerre.ics]
* update and fix our documentation and infrastructure as __Steinar__ finds
problem areas - low priority
* find missing {{nob}} parallel texts in corpus
* improve number PLX conversion
* add {{sma}} texts to the corpus repository
* improve automatic alignment process
* collect a list of PR recipients, forward to Berit Karen Paulsen
* run all known spelling errors in the corpus through the speller
* add extraction of all known spelling errors in the regular corpus (not the
{{prooftest}} corpus), and check that they are properly corrected
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
!! Maaren
[iCal|/doc/admin/weekly/2007/Tasks_2007-04-02_Maaren.ics]
* lexicalise actio compounds
* Manually mark speller test documents for typos
!! Saara
[iCal|/doc/admin/weekly/2007/Tasks_2007-04-02_Saara.ics]
* prepare more files for manual alignment
* mark-up the added speller test texts, using our existing xml format
* improve cgi-bin scripts
** add new features to the paradigm generator
* add new XSL/XML headers for proofing test docs
* continue with speller test data
* compilation of verb lists
* speed in smj conversion
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
!! Sjur
[iCal|/doc/admin/weekly/2007/Tasks_2007-04-02_Sjur.ics]
* finish press release for the beta
* collect a list of PR recipients
* improve version info in the speller lexicons
* run all known spelling errors in the corpus through the speller
* document the AppleScript testing tool
* write tools for statistical analysis of test results
* integrate regression self tests with the make file
* make improved {{smj}} speller (incl. derivations and compounds)
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
!! Steinar
[iCal|/doc/admin/weekly/2007/Tasks_2007-04-02_Steinar.ics]
* Beta testing: Align manually (shorter texts)
* Manually mark speller test texts for typos (making them into gold standards),
add the texts to the {{orig/}} directory
* Complete the semantic sets in sme-dis.rle
* missing lists
* Look at the actio compound issue when adding from missing lists
* Align corpus manually
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
!! Thomas
[iCal|/doc/admin/weekly/2007/Tasks_2007-04-02_Thomas.ics]
* work with compounding
* Lack of lowering before hyphen: Twol rewrite.
* translate beta release docs to {{sme}} and {{smj}}
* Add potential speller test texts
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
!! Tomi
[iCal|/doc/admin/weekly/2007/Tasks_2007-04-02_Tomi.ics]
* make improved {{smj}} speller (incl. derivations and compounds)
* make PLX conversion test sample; add conversion testing to the make file
* improve number PLX conversion
* improve prefix and middle-noun PLX conversion
* [fix bugs!|http://giellatekno.uit.no/bugzilla]
!! Trond
[iCal|/doc/admin/weekly/2007/Tasks_2007-04-02_Trond.ics]
* Test the beta versions
* Work on the parallel corpus issues
** Discuss with Anders
** Work on the aligner (with __Børre__)
** fix {{sme}} texts in corpus this month
** find missing {{nob}} parallel texts in corpus, go through Saara's list
* Postpone these tasks to after the beta:
** update the {{smj}} proper noun lexicon, and refine the morphological
analysis, cf. the propernoun-smj-lex.txt
** Go through the Num bugs
* Improve automatic alignment process
* Align corpus manually
* Add potential speller test texts
* collect a list of PR recipients
* [fix bugs!|http://giellatekno.uit.no/bugzilla]