I’d like to automate the process of analyzing grammar coverage and ambiguity for my inferred grammars. Before I do this myself, I want to know whether PyDelphin or other DELPH-IN software already has this functionality. If so, can someone point me to the documentation? Basically, after creating a profile with either PyDelphin or art, I’d like to know what the coverage, ambiguity, and lexical coverage are.
You won’t be able to compute lexical coverage from a profile of sentences. The only way I know to do it is to instead feed the words to the parser one by one, as if each were its own sentence, and count the lexical gap errors. If you try to do it on a sentence-level profile, I believe you will be limited to detecting one lexical gap per sentence.
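A minimal sketch of this word-by-word idea, kept deliberately generic: `lexical_coverage` and `parses_word` are hypothetical names, and `parses_word` stands in for whatever check you use (e.g. parsing the word in isolation with ACE and looking for a lexical gap error).

```python
def lexical_coverage(sentences, parses_word):
    """Fraction of sentences in which every word gets a lexical analysis.

    `parses_word` is any callable taking a single word and returning True
    if the grammar can analyze it (e.g. by parsing it as a one-word input
    and checking that no lexical-gap error is reported).
    """
    if not sentences:
        return 0.0
    covered = sum(
        1 for s in sentences
        if all(parses_word(w) for w in s.split())
    )
    return covered / len(sentences)
```

With a toy lexicon, `lexical_coverage(["the dog barks", "the cat zzz"], {"the", "dog", "barks", "cat"}.__contains__)` counts only the first sentence as fully covered, giving 0.5.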
That’s what I meant-- I might be using the wrong terminology. I want the percentage of sentences for which every word was analyzed even if the sentence didn’t parse.
Regarding coverage and ambiguity, that’s one thing gTest could do, but that project is deprecated as of the new version of PyDelphin, and I don’t have replacement functionality for everything.
PyDelphin can help you with basic coverage analysis via the select command. Note that you probably want to separately count the items marked as grammatical (i-wf==1) from those marked as ungrammatical (i-wf==0):

$ delphin mkprof -s path/to/skeleton path/to/profile        # prepare profile
$ delphin process -g path/to/grm.dat path/to/profile        # parse it
$ delphin select "readings where i-wf==1" path/to/profile   # grammatical
$ delphin select "readings where i-wf==0" path/to/profile   # ungrammatical
The select commands will print a list of numbers, one per input, giving the number of readings for each item marked as (un)grammatical. You can then look for zeros when i-wf==1 to find parse failures, and for non-zeros when i-wf==0 to find overgeneration. You can also count the readings to get average ambiguity, etc. To get sentences with lexical gaps, you can search the error string for ACE’s error message, e.g.:

$ delphin select 'i-id where error ~ "lexical gap"' path/to/profile
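The counting step is easy to script; here is a sketch in plain Python (not PyDelphin API, and `summarize` is a hypothetical name) that operates on the per-item reading counts printed by select, once parsed into ints:

```python
def summarize(readings):
    """Summarize a list of per-item reading counts from `delphin select`.

    `readings` is a list of ints, one per item, e.g. the lines printed by
    `delphin select "readings where i-wf==1" ...` converted with int().
    """
    n = len(readings)
    parsed = [r for r in readings if r > 0]  # items with at least one parse
    return {
        "items": n,
        "coverage": len(parsed) / n if n else 0.0,              # fraction parsed
        "ambiguity": sum(parsed) / len(parsed) if parsed else 0.0,  # mean readings per parsed item
    }
```

For example, `summarize([0, 1, 3, 0, 2])` reports coverage 0.6 (3 of 5 items parsed) and average ambiguity 2.0 over the parsed items. For the i-wf==0 list you would instead flag the non-zero entries as overgeneration.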
You may want to wrap up this functionality in a script. Speaking of which, this is essentially how I replicated gTest’s regression test functionality for the Matrix. The basic workflow is this:

$ delphin mkprof -s path/to/skeleton path/to/profile   # prepare profile
$ delphin process -g path/to/grm.dat path/to/profile   # parse it
$ delphin compare path/to/profile path/to/gold         # find differences
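If you do wrap this in a script, the workflow above amounts to three subprocess calls. A sketch, assuming `delphin` is on your PATH; `regression_commands` and `run_regression_test` are hypothetical names, and the paths are placeholders:

```python
import subprocess

def regression_commands(skeleton, grammar, profile, gold):
    """Build the three delphin commands for one regression test."""
    return [
        ["delphin", "mkprof", "-s", skeleton, profile],   # prepare profile
        ["delphin", "process", "-g", grammar, profile],   # parse it
        ["delphin", "compare", profile, gold],            # find differences
    ]

def run_regression_test(skeleton, grammar, profile, gold):
    """Run the commands in order, stopping on the first failure."""
    for cmd in regression_commands(skeleton, grammar, profile, gold):
        subprocess.run(cmd, check=True)
```

Doing it through the Python API instead (as rtest.py does) avoids the subprocess round-trips, but the shell-command version is the most direct translation of the workflow above.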
The rtest.py script (in the Matrix repository) essentially does this (through the Python API) with some added features for the Matrix’s setup, which also involves customizing a grammar, as well as discovering tests and reporting the results nicely. You might look at that script for an example if you want to adapt it to the Matrix, or to see how to print reports with colors, etc.
Thanks Mike! PyDelphin said my profile was invalid… separate issue for a separate discussion. It turns out it is really trivial to get this information out of the parse file, though. I’m happy to share my script if anyone needs it (it’ll be in the AGGREGATION repo somewhere too).