Any way to make a mal-rule dispreferred in general?

For a system prioritizing precision, it would be good if the grammars equipped with mal-rules only used them when they can’t otherwise parse the sentence (or something like that).

Have people (@Dan?) come up with ways to ensure that? I have a parse ranking model for the SRG which I hoped would help, but it doesn’t seem to (or not as much as I would hope). For context: my mal-rules are inflectional lexical rules.

You could add weights by hand, but I think it is better to just treebank more.

Suppose retraining on a bigger treebank is not feasible given time constraints: how would I add those weights manually? What would that look like? Is there an example anywhere?

I agree that ideally one would annotate a treebank containing example sentences that make use of mal-rules, and train a parse selection model that would presumably favor well-formed analyses unless a sentence contained an error requiring a mal-rule. I have never taken the time to produce such a treebank for the ERG’s mal-grammar, so I can hardly complain that others don’t, even though it is surely the better approach. Instead, in the ERG I have manually added constraints to both mal-rules and normal rules to reduce the frequency with which mal-rules apply to well-formed structures. This too is time-consuming, and it reduces the robustness of the grammar on well-formed text, since often it is the ‘normal’ rules (or lexical types) that have to be constrained to prevent a particular mal-analysis of a grammatical sentence. This approach worked well enough in the constrained language-learning environments that the mal-ERG was used for, but did not generalize well to analysis of open-text essays.

If you want to experiment with assigning (rather arbitrary) weights by hand, and you are using a maxent model for the regular grammar (using a model stored as a file called something.mem), it might be possible to simply add lines to that file for your mal-inflectional rules where you experiment with different weights. For example, the ERG’s redwoods.mem contains lines such as the following:
(1447) [1 (1) v_n3s-bse_ilr v_-_le “sleep”] 0.211680 {0 0 0 0} [0 0]
so if there were a mal-version of the inflectional rule v_n3s-bse_ilr, called, say, v_n3s-bse-mal_ilr, you could try adding a similar line substituting the mal-rule’s name for the normal one, and choosing some (presumably smaller) weight in place of the “0.211680” above. I suspect that anyone who actually knows something about maxent models would disapprove of such experimentation, and there might well be good reasons why such human meddling could not work, but until they speak up to save you from folly, the experiment shouldn’t be too hard to try.
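To see why a smaller (or negative) weight would demote the mal-rule, here is a toy sketch of how maxent parse selection ranks analyses: each parse is scored as the sum of the weights of the features that fire in it, and parses are ranked by that score (or by the normalized exponentiated score). The feature strings and weights below are invented for illustration only; they mimic the shape of .mem entries but not the real feature templates (which also include grandparenting, n-gram features, etc.):

```python
from math import exp

# Hand-assigned weights, keyed by (simplified) feature strings in the
# spirit of .mem entries.  All values here are made up for illustration.
weights = {
    "v_n3s-bse_ilr v_-_le 'sleep'":     0.211680,  # normal inflectional rule
    "v_n3s-bse-mal_ilr v_-_le 'sleep'": -1.0,      # hypothetical mal-rule, penalized
    "hd-cmp_u_c":                       0.05,      # some shared syntactic feature
}

def score(parse_features):
    """Sum the weights of the features that fire in a parse;
    unknown features contribute nothing."""
    return sum(weights.get(f, 0.0) for f in parse_features)

# Two competing analyses of the same sentence, differing only in
# whether the normal or the mal inflectional rule was used.
normal_parse = ["v_n3s-bse_ilr v_-_le 'sleep'", "hd-cmp_u_c"]
mal_parse    = ["v_n3s-bse-mal_ilr v_-_le 'sleep'", "hd-cmp_u_c"]

# Maxent turns scores into a conditional distribution over the parses.
total = exp(score(normal_parse)) + exp(score(mal_parse))
p_normal = exp(score(normal_parse)) / total

print(score(normal_parse), score(mal_parse), round(p_normal, 3))
```

Because ranking depends only on the summed weights, pushing the mal-rule’s weight down (even below zero) demotes every parse that uses it without touching the relative ranking among normal analyses, which is the effect you want when the grammar can also parse the sentence without the mal-rule.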

I think perhaps this is what I need at this point. Where should I start looking, to get an idea? Can you give me any example sentences whose parse would allow me to track the mechanism?