Get ACE to generate a specific phrase using constraints?

Is there any way to have ACE generate a phrase from a simple MRS that is fully constrained, so that it outputs just that one interpretation?

For example, “eat the food” creates this simple MRS:

[ TOP: h0
INDEX: e2
RELS: < [ pronoun_q LBL: h4 ARG0: x3 [ x PERS: 2 PT: zero ] RSTR: h5 BODY: h6 ]
[ pron LBL: h7 ARG0: x3 [ x PERS: 2 PT: zero ] ]
[ _the_q LBL: h9 ARG0: x8 [ x PERS: 3 NUM: sg ] RSTR: h10 BODY: h11 ]
[ _food_n_1 LBL: h12 ARG0: x8 [ x PERS: 3 NUM: sg ] ]
[ _eat_v_1 LBL: h1 ARG0: e2 [ e SF: comm TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: x3 ARG2: x8 ] >
HCONS: < h0 qeq h1 h5 qeq h7 h10 qeq h12 > ]

And one solution is this:

                   ┌pron:x3
 pronoun_q:x3,h5,h6┤
                   │                 ┌_food_n_1:x8
                   └_the_q:x8,h10,h11┤
                                     └_eat_v_1:e2,x3,x8
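To make the notion of a "solution" concrete: the underspecified MRS above has two valid scope resolutions (one per quantifier order), and the tree shown is one of them. Here is a brute-force sketch that enumerates pluggings of holes with labels and keeps those satisfying the qeqs and variable binding. The EPs are hand-encoded as plain Python data, and this is only an illustration of what scope resolution involves, not how ACE or any other DELPH-IN tool does it:

```python
from itertools import permutations

# Hand-encoded EPs from the "eat the food" MRS above.
# quantifier label -> (bound variable, RSTR hole, BODY hole)
QUANTS = {"h4": ("x3", "h5", "h6"),    # pronoun_q
          "h9": ("x8", "h10", "h11")}  # _the_q
# non-quantifier label -> variables the EP mentions
LEAVES = {"h7": {"x3"},                # pron
          "h12": {"x8"},               # _food_n_1
          "h1": {"x3", "x8"}}          # _eat_v_1
QEQS = [("h0", "h1"), ("h5", "h7"), ("h10", "h12")]
HOLES = ["h0", "h5", "h6", "h10", "h11"]
LABELS = ["h4", "h9", "h7", "h12", "h1"]

def qeq_holds(plug, hole, target):
    """hole qeq target: the hole reaches the target through a chain of quantifier BODYs."""
    lbl, seen = plug[hole], set()
    while lbl != target:
        if lbl not in QUANTS or lbl in seen:
            return False
        seen.add(lbl)
        lbl = plug[QUANTS[lbl][2]]   # follow the BODY hole
    return True

def scope_of(plug, lbl, seen=None):
    """All labels in the subtree rooted at lbl under this plugging."""
    seen = set() if seen is None else seen
    if lbl not in seen:
        seen.add(lbl)
        if lbl in QUANTS:
            _, rstr, body = QUANTS[lbl]
            scope_of(plug, plug[rstr], seen)
            scope_of(plug, plug[body], seen)
    return seen

def resolutions():
    """Enumerate bijective pluggings of holes with labels; keep the well-formed ones."""
    found = []
    for perm in permutations(LABELS):
        plug = dict(zip(HOLES, perm))
        if not all(qeq_holds(plug, h, t) for h, t in QEQS):
            continue
        # every EP mentioning a variable must lie in the scope of its quantifier
        if all(lbl in scope_of(plug, qlbl)
               for qlbl, (var, _, _) in QUANTS.items()
               for lbl, vs in LEAVES.items() if var in vs):
            found.append(plug)
    return found
```

Running `resolutions()` yields exactly two pluggings, corresponding to pronoun_q outscoping _the_q (the tree above) and the reverse order.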

So I thought maybe using “leq” constraints like this would work:

[ TOP: h0
INDEX: e2
RELS: < [ pronoun_q LBL: h4 ARG0: x3 [ x PERS: 2 PT: zero ] RSTR: h5 BODY: h6 ]
[ pron LBL: h7 ARG0: x3 [ x PERS: 2 PT: zero ] ]
[ _the_q LBL: h9 ARG0: x8 [ x PERS: 3 NUM: sg ] RSTR: h10 BODY: h11 ]
[ _food_n_1 LBL: h12 ARG0: x8 [ x PERS: 3 NUM: sg ] ]
[ _eat_v_1 LBL: h1 ARG0: e2 [ e SF: comm TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: x3 ARG2: x8 ] >
HCONS: < h5 leq h7 h6 leq h9 h10 leq h12 h11 leq h1 > ]

But the ACE generator produces no output for it.

Any ideas?

I don’t think ACE does scope resolution at all and similarly does not understand scope-resolved forms of handle constraints. Much of our tooling has so far been content working only in the underspecified form.

You might look into the suggestions we had in the following thread: Is there a way to indicate in MRS that the utterance is active voice?

Maybe I’m really confused, but doesn’t it have to do scope resolution in order to generate utterances from an MRS structure? Otherwise it could just generate garbage, right? Just checking my understanding here…

I took a look at that thread. I might be missing something, but the suggestions about ICONS don’t apply, since we’re talking about handles and holes here, right (again, checking my understanding)?

As for the other suggestions in that thread involving updating the parsing rules…that’s way beyond my current knowledge level. I’ve not yet tried to build rules, only to use them.

I coincidentally ordered a copy of “Implementing Typed Feature Grammars” recently. Maybe after a little light reading I’ll be up for the challenge.

Sorry, I should have read more closely. My ICONS suggestion was for a strategy for getting the generator to produce a particular sentence when you have the scope-underspecified MRS, not for a fully specified one.

I looked at the ACE source code and I do see that it can read leq constraints on an MRS, as well as geq (I’m not sure what that is). But I’m also unable to make it work. I suspect it’s having trouble mapping the MRS structures to grammar rules for reconstructing a derivation, and this could be partly because the trigger rules expect qeqs. For instance, from mtr.tdl:

light_verb_mtr := monotonic_mtr &
[ INPUT [ RELS.LIST < [ LBL #h1, ARG0 #e2, ARG1 #x3, ARG2 #x4 ],
                  [ LBL #h6, ARG0 #x4 ],
                  [ ARG0 #x4, RSTR #h5 ], ... >,
          HCONS <! qeq & [ HARG #h5, LARG #h6 ] !> ],
  OUTPUT.RELS.LIST < [ LBL #h1, ARG0 #e2, ARG1 #x3 ], ... > ].

If you want to try creating new trigger rules, see this wiki: http://moin.delph-in.net/LkbGeneration

Maybe Francis or Woodley would know more, though.

Also, regarding the MRS, I note that nothing links TOP to the top scope. You could add h0 leq h4 for that. Also, using leq should be the same as actually equating the labels, meaning it’s equivalent to this:

[ TOP: h0
  INDEX: e2
  RELS: <
    [ pronoun_q LBL: h0 ARG0: x3 [ x PERS: 2 PT: zero ] RSTR: h5 BODY: h6 ]
    [ pron LBL: h5 ARG0: x3 [ x PERS: 2 PT: zero ] ]
    [ _the_q LBL: h6 ARG0: x8 [ x PERS: 3 NUM: sg ] RSTR: h10 BODY: h11 ]
    [ _food_n_1 LBL: h10 ARG0: x8 [ x PERS: 3 NUM: sg ] ]
    [ _eat_v_1 LBL: h11 ARG0: e2 [ e SF: comm TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: x3 ARG2: x8 ] >
  HCONS: < > ]
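That label-equating step can be sketched in a few lines, assuming the RELS are represented as plain Python dicts (illustrative only, not pydelphin or ACE internals): each `hole leq label` pair is collapsed by renaming the label to the hole everywhere it occurs, which produces exactly the MRS above.

```python
HANDLE_FEATS = {"LBL", "RSTR", "BODY"}  # the handle-valued features to rewrite

# RELS of the original "eat the food" MRS, as plain dicts
RELS = [
    {"pred": "pronoun_q", "LBL": "h4", "ARG0": "x3", "RSTR": "h5", "BODY": "h6"},
    {"pred": "pron", "LBL": "h7", "ARG0": "x3"},
    {"pred": "_the_q", "LBL": "h9", "ARG0": "x8", "RSTR": "h10", "BODY": "h11"},
    {"pred": "_food_n_1", "LBL": "h12", "ARG0": "x8"},
    {"pred": "_eat_v_1", "LBL": "h1", "ARG0": "e2", "ARG1": "x3", "ARG2": "x8"},
]
# (hole, label) leq pairs, including h0 leq h4 for TOP
LEQ = [("h0", "h4"), ("h5", "h7"), ("h6", "h9"), ("h10", "h12"), ("h11", "h1")]

def equate_leq(rels, leq_pairs):
    """Collapse each `hole leq label` pair by renaming the label to the hole."""
    subst = {label: hole for hole, label in leq_pairs}
    def canon(h):
        seen = set()
        while h in subst and h not in seen:  # follow chains, guard against cycles
            seen.add(h)
            h = subst[h]
        return h
    return [{feat: canon(val) if feat in HANDLE_FEATS else val
             for feat, val in ep.items()}
            for ep in rels]
```

After `equate_leq(RELS, LEQ)`, pronoun_q carries LBL h0, pron LBL h5, _the_q LBL h6, _food_n_1 LBL h10, and _eat_v_1 LBL h11, matching the rewritten MRS.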

No. The whole point of underspecifying scope is that scope is hard to resolve and most of the time it’s not necessary. In generation, ACE uses the grammar’s MRSs, and the ERG underspecifies scope.

@goodmami Duh, I didn’t even think to just assign the labels instead of using constraints. I’ll try that and see if it works. Thanks!

@guyemerson: Maybe it is my confusion with terminology: In order to generate a phrase, ACE has to take an underspecified variant of MRS and figure out the various ways the predicates can be validly “put together” to form a single tree, ensuring that it doesn’t violate constraints and that variables are properly scoped. For each valid tree that is created, it then generates a phrase from it. Isn’t that “putting together” step called scope resolution?

No. ACE knows nothing about resolving scope. ACE can only perform the mapping defined by the grammar, which in the case of the ERG is between English strings and scope-underspecified MRSes. There may be situations where you can pass a scope-resolved MRS into ACE and get the same generation results as you would have from the underspecified form, although I don’t know the current situation regarding that. It depends on (1) variable uniqueness being enforced by differing INSTLOC features, together with the qeq type intentionally identifying the two sides’ INSTLOCs, and (2) ACE not checking (or not checking historically? I don’t remember) whether HCONS is subsumed when filtering generation results. But that is a bit of a technicality.

The more important point to make regarding this whole thread is that generating from a scope resolved MRS would not usually give you a different set of realizations than generating from the underspecified MRS. There is no advantage, unless it’s simply easier for you to produce scope resolved forms. Certainly for your “eat the food” example, you wouldn’t be ruling out any realizations, and in general fixing the order of quantifiers does not constrain the syntax in the ERG.

Regarding ACE’s ability to read MRS structures containing “leq” constraints: yes, it can, but again, that doesn’t mean it can interpret them. It would only be useful if the ERG produced MRSes with “leq” constraints, which it doesn’t.


Roughly speaking, generation in ACE is like this: given an MRS, it looks up the predicates in the grammar, to get lexical entries. It then tries to put them together using rules in the grammar. For each way that the lexical entries can be put together, it compares the resulting MRS with the original MRS. If they match (with a little wiggle room), that’s a valid generation. The ERG produces MRSs which are underspecified for scope, so ACE is always comparing these underspecified MRSs.
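The final "compare with a little wiggle room" step can be sketched roughly like this, assuming RELS lists of plain dicts. Treat it as a cartoon: it only checks that the two MRSs match up to variable renaming, whereas real ACE matching also deals with variable properties, HCONS, and subsumption.

```python
def signature(rels):
    """Canonicalize variable names by order of first mention (after sorting
    the EPs by predicate), so two RELS lists that differ only in how their
    handles and variables are named compare equal."""
    mapping = {}
    def canon(v):
        if v not in mapping:
            mapping[v] = "v%d" % len(mapping)
        return mapping[v]
    return tuple(
        tuple((feat, val if feat == "pred" else canon(val))
              for feat, val in sorted(ep.items()))
        for ep in sorted(rels, key=lambda ep: ep["pred"]))

def mrs_match(rels_a, rels_b):
    """True if the two RELS lists are identical up to variable renaming."""
    return signature(rels_a) == signature(rels_b)
```

For example, `mrs_match` treats `[{"pred": "_dog_n_1", "LBL": "h1", "ARG0": "x2"}]` and `[{"pred": "_dog_n_1", "LBL": "h8", "ARG0": "x9"}]` as the same semantics.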

As @goodmami and @sweaglesw have pointed out, the ERG doesn’t resolve scope, so constraining scope won’t constrain the grammar – but it might confuse ACE.

Wow, mind blown. That is so not the way I was imagining ACE generated phrases. OK, thanks for the explanation @guyemerson.

“The more important point to make regarding this whole thread is that generating from a scope resolved MRS would not usually give you a different set of realizations than generating from the underspecified MRS. There is no advantage, unless it’s simply easier for you to produce scope resolved forms. Certainly for your “eat the food” example, you wouldn’t be ruling out any realizations, and in general fixing the order of quantifiers does not constrain the syntax in the ERG.”

@sweaglesw, it seems like you are saying this is true in general and not an artifact of the way ACE works. I was under the (apparently mistaken) impression that scope resolution would actually constrain the surface syntax, or at least more than you are saying it does. It sounds like scope resolution only constrains the semantic interpretation.

I guess this makes sense when I think about how I would reword “Every dog chases some white cat.” so that it doesn’t have scope ambiguity. Everything I come up with is a pretty radical rewording, not just a reshuffling of word order and punctuation.
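For the record, the two readings of that sentence correspond to these first-order formulas (wide-scope *every* vs. wide-scope *some*):

```latex
% every >> some: possibly a different cat per dog
\forall x\,\bigl(\mathit{dog}(x) \rightarrow \exists y\,(\mathit{white}(y) \wedge \mathit{cat}(y) \wedge \mathit{chase}(x,y))\bigr)
% some >> every: one particular cat chased by all the dogs
\exists y\,\bigl(\mathit{white}(y) \wedge \mathit{cat}(y) \wedge \forall x\,(\mathit{dog}(x) \rightarrow \mathit{chase}(x,y))\bigr)
```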

I think this all indirectly solved my problem too. I was imagining that I could use ACE to write back how my engine “semantically interpreted” the ambiguous English phrase. I’m realizing now that probably the only form that is going to really do this is the logical form.

Thanks for the help!
