(This is just some notes, not a concrete question or proposal.)
Currently in the Grammar Matrix, I can in principle specify verb types based on their inflectional properties, but it is then position classes that constrain inputs. This means I need a separate position class for, e.g., tense that goes with verb1, another tense position class for verb2, and so on. And in most cases, each lexical rule type within each PC will have just one lexical rule instance (for a language like Russian).
If I could have one position class for tense/pernum but constrain the lexical rule instances somehow with respect to which inflectional class they go with, that would be so much more economical…
I am sure there are solid reasons not to do this, though (or it is simply impossible due to something structural in the hierarchy or the underlying type definitions, which are in turn well-motivated). Like I said, I don't really have a clear picture or an opinion; these are just thoughts I keep having whenever I need to model even a tiny fragment of something like Russian morphology…
We definitely can’t push the constraints down onto the instances (I mean, we could, but as a style point we try to keep constraints on types, not instances), but I suppose you could use requires constraints and push it down to types within a PC. But a PC is just a kind of type, so I’m not sure why that would be preferable. Can you elaborate on why you find this uneconomical?
Ooh! The require/forbid constraints! I keep forgetting about them!
They do seem helpful; at least it looks like I can have, e.g., one pernum position class and separate LRTs for different conjugations.
I guess I keep thinking that a PC is also a slot, conceptually, and I am having a reaction of sorts to having to create multiple PCs for what seems like the same slot. But yes, I see of course how, if I have several nodes in the graph for different types of stems, then I must have a node representing, say, a pernum PC for each.
I suppose it just seems annoying to create several dozen of those for something like Russian, with 6 lexical rules in each, all specifying the same features again and again, and each LRT having only one LRI. But that’s just a data entry problem, I think, not a theoretical one.
Anyway, I think the require/forbid constraints work. Would they be considered syntactic sugar of sorts? I guess not always, right, since we don’t handle the require/forbid constraints in, say, MOM. And if we did, we would likely map them to different nodes in the graph?
The requires constraints are actually not just syntactic sugar. They function differently from the notion of “input” of a position class: the input determines what can serve as the DTR of the rule, as implemented in terms of the -dtr types. A requires (or forbids) constraint is handled via FLAG features inside INFLECTED. One key difference is that a requires constraint can be non-local, i.e., it need not refer to the immediate input.
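Not speaking for the actual TDL implementation, here is a toy Python sketch (all names invented) of that contrast: input-based licensing is a local check on the immediate daughter, while a flag-based requires constraint can be satisfied by any earlier rule in the derivation.

```python
# Toy model (NOT the Matrix's -dtr types or INFLECTED flags) of the
# two licensing mechanisms described above.

def can_attach_by_input(rule_inputs, daughter):
    """Input-based licensing: the daughter must be an allowed
    immediate input of the rule (a purely local check)."""
    return daughter in rule_inputs

def requires_satisfied(required_flags, flags_so_far):
    """Flag-based licensing: every flag the rule requires must have
    been set by SOME earlier rule in the derivation, not necessarily
    the immediately preceding one (a non-local check)."""
    return required_flags <= flags_so_far

# Hypothetical derivation: stem -> conj1 marker -> tense -> pernum.
flags = set()
flags.add('conj1')   # conjugation-class rule sets its flag
flags.add('tense')   # an intervening tense rule applies next
# The pernum rule can require 'conj1' even though it was set two
# steps back, whereas an input check only sees the immediate daughter:
assert requires_satisfied({'conj1'}, flags)
assert not can_attach_by_input({'tense-rule'}, 'conj1-rule')
```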
Sorry, it’s been too long for me to recall all the details, but the basic idea is that the inputs determine the slot order, and the co-occurrence restrictions constrain the co-occurrence of slots (sorry, that’s a bit tautological). I think a “position class” is just an expository abstraction: in the grammar it’s just a lexical rule type that is the supertype of other lexical rule types, and inputs can go into the position class supertype or into its subtypes. And in addition to non-local constraints, as Emily mentioned, you can also use co-occurrence restrictions at different levels; e.g., you could have one lexical rule (sub)type or instance in a position class require a specific lexical rule (sub)type or instance in another position class.
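To make the “different levels” point concrete, here is a hypothetical sketch (rule names and flag names are made up, and this is not the Matrix machinery itself) of the setup discussed earlier in the thread: one pernum position class whose LRT subtypes each require a conjugation-class flag, instead of duplicating the whole PC per conjugation.

```python
# Hypothetical single pernum position class; each LRT subtype carries
# its own co-occurrence requirement rather than the PC as a whole.
PERNUM_PC = {
    'pernum-conj1-lrt': {'requires': {'conj1'}},
    'pernum-conj2-lrt': {'requires': {'conj2'}},
}

def licensed_lrts(pc, flags):
    """Return the LRTs in a position class whose requirements are
    satisfied by the flags accumulated so far in the derivation."""
    return [name for name, spec in pc.items()
            if spec['requires'] <= flags]

# A stem that went through the conj1 rule only licenses the conj1 LRT:
assert licensed_lrts(PERNUM_PC, {'conj1'}) == ['pernum-conj1-lrt']
```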
Pruning inputs to model co-occurrence restrictions was posited as a future line of work in my UWWPL paper as it could lead to a more efficient grammar with fewer flags to copy up on rule applications, but I did not fully implement the code to do this automatically. Also, it may make the grammar harder to maintain in the long run if it complicates later changes to the morphotactics of a grammar, but that’s just a hunch.
See Table 2 on page 20 of the UWWPL paper for how the three restrictions have different uses: https://depts.washington.edu/uwwpl/vol30/goodman_2013.pdf#page=20
Also note that “force” (forward-require) and “require” (backward-require) are implemented differently, but there is only one “forbid”, as the implementation is the same whether A forbids B or B forbids A. Table 3 (also page 20) shows the flag values that make that possible.
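As a rough illustration (deliberately not the actual flag encoding of Table 3), the asymmetry falls out of which part of the derivation each constraint looks at: backward-require checks what has already applied, forward-require obligates something later, and forbid is a symmetric co-occurrence ban, so one check covers both directions.

```python
# Toy checks over a derivation modeled as an ordered list of rules.

def ok_backward_require(derivation, b, a):
    """b requires a (backward): wherever b applies, a must have
    applied earlier in the derivation."""
    return all(a in derivation[:i]
               for i, r in enumerate(derivation) if r == b)

def ok_forward_require(derivation, a, b):
    """a forces b (forward): wherever a applies, b must apply later."""
    return all(b in derivation[i + 1:]
               for i, r in enumerate(derivation) if r == a)

def ok_forbid(derivation, a, b):
    """a forbids b: the two rules never co-occur at all; the
    condition is symmetric, so b-forbids-a is the very same check."""
    return not (a in derivation and b in derivation)

assert ok_backward_require(['a', 'x', 'b'], 'b', 'a')
assert not ok_backward_require(['b', 'a'], 'b', 'a')  # a came too late
assert ok_forward_require(['a', 'b'], 'a', 'b')
assert ok_forbid(['x', 'b'], 'a', 'b') == ok_forbid(['x', 'b'], 'b', 'a')
```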