Converting MRS output to a logical form

OK, I’m really new to Linguistics and NL* (but not software engineering in general), so I’m sorry if this is naive: I’m trying to do a logic-based analysis of English sentences using the MRS output of the ERG.

Currently, I’m just trying to convert the MRS output for a sentence into any logical (or semi-logical) form to get my head around the problem. I would appreciate any pointers to docs that describe how to do this, or how someone did it, in any domain.

Here’s where I’ve gotten and some issues I’ve had to illustrate what I’m looking for:

Producing Valid Trees from the MRS
I believe I am now putting together all valid scoped forms of an MRS tree, in a predicate-logic-“ish” form:

Where are you?

pronoun_q(x5, which_q(x4, place_n(x4), pron(x5)), loc_nonsp(e3, x5, x4))
which_q(x4, place_n(x4), pronoun_q(x5, pron(x5), loc_nonsp(e3, x5, x4)))
which_q(x4, pronoun_q(x5, pron(x5), place_n(x4)), loc_nonsp(e3, x5, x4))
pronoun_q(x5, pron(x5), which_q(x4, place_n(x4), loc_nonsp(e3, x5, x4)))

First Issue: Logical Form of Generalized Quantifier Predicates
I’m now trying to drill down on implementing the predicates. What I really need is a definition of what each quantifier predicate is doing logically. For example, the classic “Everyone” quantifier might be defined like this:

Everyone → {X : People ⊆ X}

My assumption (probably wrong) is that the quantifier predicates (the ones with RSTR and BODY) each represent a Generalized Quantifier which has a formal logic function of some kind and the people writing the rules of the grammar are effectively converting utterances into a structure that uses these to represent the logical form of the utterance.

Maybe all of these predicates just represent Generalized Quantifier forms that are “just known” in the linguistics community and listed in various academic publications? If so, I’d love a pointer to a good source that compiles them as a reference…
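To make my assumption concrete, here’s a minimal sketch of how I’m picturing generalized quantifiers: as relations between a restriction set and a body set. The predicate names mimic ERG-style names, but the set-theoretic definitions are my own guesses, not anything the ERG specifies.

```python
# Sketch: generalized quantifiers as relations between two sets.
# Definitions are my assumptions, not the ERG's.

def every_q(restriction, body):
    """True iff every entity in the restriction is also in the body."""
    return restriction <= body  # subset test

def some_q(restriction, body):
    """True iff at least one entity is in both sets."""
    return bool(restriction & body)

def no_q(restriction, body):
    """True iff no entity is in both sets."""
    return not (restriction & body)

dogs = {"fido", "rex"}
barkers = {"fido", "rex", "mittens"}
print(every_q(dogs, barkers))  # True: every dog barks
```

Is this roughly the right mental model for what the RSTR/BODY predicates are encoding?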

Second Issue: Evaluating Generalized Quantifier Predicates
My (possibly very naive) confusion is in understanding how the scoping of quantifiers works. For a statement like this:

pronoun_q(x5, pron(x5), which_q(x4, place_n(x4), loc_nonsp(e3, x5, x4)))

I believe I should process the “pronoun_q” predicate (conceptually) like this:

  1. Start with the entire universe of things in x5.
  2. Restrict x5 down to be just those that have the property “pron”.
  3. Run the quantifier “pronoun_q” on that set to produce a quantified set by doing…whatever pronoun_q does…and placing that into x5.
  4. Finally, use the value of x5 in the body of the quantifier wherever that variable is specified.
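The four steps above might be sketched like this. I’m assuming (possibly wrongly) that pronoun_q behaves like an existential over the restricted set; the universe and the `pron` criterion are made-up stand-ins.

```python
# Sketch of the four steps, assuming pronoun_q acts existentially.
# The universe and the pron() criterion are hypothetical.

universe = {"speaker", "listener", "kitchen", "garden"}  # step 1

def pron(x):
    """Hypothetical restriction: x is a pronoun referent."""
    return x in {"speaker", "listener"}

def pronoun_q(var_universe, restriction, body):
    restricted = {x for x in var_universe if restriction(x)}  # step 2
    # steps 3-4: "whatever pronoun_q does" -- here, existential closure
    return any(body(x) for x in restricted)

# body stub: "x5 is located somewhere"
print(pronoun_q(universe, pron, lambda x5: True))  # True
```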

If I’m just way off base with this, I’d love any pointers that will help me understand better.

Assuming I’m basically on the right track, I’m struggling with how to process the “which_q” predicate of this one:

pronoun_q(x5, which_q(x4, place_n(x4), pron(x5)), loc_nonsp(e3, x5, x4))

For this case, processing the restriction, place_n(x4), and the predicate, which_q, proceeds as before. But: I don’t understand how to think about the body since it doesn’t include the x4 variable in it at all. Does that mean it is simply not affected by the RSTR?

Again, could be way off track here, I’m swapping in a lot of background…

Sorry for the long post, appreciate any pointers in the right direction!

Hi Eric – part of what’s going on here is that every x-type variable must be bound by some quantifier. This means we have to put in quantifiers for pronouns, which is what pronoun_q is. You can think of it as a species of existential quantifier. which_q goes with wh question words (here where) and indicates that this is a parameter of a wh question.

It looks like we haven’t yet drafted the Quantifiers page of the ERG Semantics documentation, but that’s where this information should be.

One quick point: x4 is in the body. It’s the third argument of loc_nonsp. There is a bit of documentation for that one:

Very helpful, thanks @ebender. To your last point, I was confused about how to evaluate the “which_q” quantifier, not the “pronoun_q” quantifier. I.e.:

pronoun_q(x5, which_q(x4, place_n(x4), pron(x5)), loc_nonsp(e3, x5, x4))

            which_q(x4, place_n(x4), pron(x5))

x4 isn’t in the body of “which_q”, so I’m trying to wrap my head around how to evaluate it. Maybe I’m messing up the order of operations in how I’m interpreting things?

You are not enumerating the scoped MRS trees correctly. One of the requirements of any logical formula (and made explicit in the MRS paper’s discussion of the relationship between MRS and scope-resolved LFs) is that any appearance of a variable in a (sub)formula must be outscoped by the quantifier that binds that variable. This does not hold for two of your LFs, namely the ones in which you put the second quantifier inside the restriction of the first quantifier. The main assertion of the sentence (loc_nonsp) refers to both x4 and x5, and so it has to be outscoped by both quantifiers. As you already noticed, the interpretation is somewhat clearer for the other two LFs, which are the valid ones.
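The outscoping requirement described above can be checked mechanically. Here’s a minimal sketch (my own tuple encoding of scoped trees, not a real MRS format): walk the tree, and at each leaf predication verify that every x-variable it mentions has already been bound by an enclosing quantifier.

```python
# Sketch: validate that every x-variable use is outscoped by its binder.
# Trees are tuples: ("pred", arg, ...); quantifiers are
# ("q_name", bound_var, restriction_tree, body_tree).

QUANTIFIERS = {"pronoun_q", "which_q"}

def valid(tree, bound=frozenset()):
    pred, *args = tree
    if pred in QUANTIFIERS:
        var, rstr, body = args
        inner = bound | {var}
        return valid(rstr, inner) and valid(body, inner)
    # leaf predication: all x-variables must already be bound
    return all(a in bound for a in args if a.startswith("x"))

# x4 appears in the body but which_q is buried in the restriction: invalid
bad = ("pronoun_q", "x5",
       ("which_q", "x4", ("place_n", "x4"), ("pron", "x5")),
       ("loc_nonsp", "e3", "x5", "x4"))
# both quantifiers outscope loc_nonsp: valid
good = ("which_q", "x4", ("place_n", "x4"),
        ("pronoun_q", "x5", ("pron", "x5"),
         ("loc_nonsp", "e3", "x5", "x4")))
print(valid(bad), valid(good))  # False True
```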

The actual interpretation of the generalized quantifiers that the ERG produces is considered beyond the scope of the grammar. ERG considers its main job to be describing and operationalizing the relationship between surface form and predicate-argument structure, rather than lexical semantics, so you will find quantifiers like most_q, no_q, every_q, some_q, and a host of others, of which some have “generally accepted” interpretations among semanticists and many don’t.

Ahhh. Perfect. Thank you! I thought that filling unbound handles (holes) with top-level “floating” handles when they fulfilled the hcons was enough to guarantee a valid tree. I guess I missed that extra step. Or maybe I just have a bug in my code.

Also thanks for clarifying the approach to predicates.

If you are doing this yourself, I want to make sure you’re aware of the LKB’s ability to do this as well. If you parse something with the LKB and then from a parse tree select “Scoped MRS”, you’ll get exactly the 2 (valid) scopes that you listed (in exactly the same format, too, which surprises me if you aren’t already using the LKB for this). There’s also a “FOL approximation” option, but it is not currently working for me (with LKB-FOS).

And if you happen to be coding in Python and want to see your code merged into PyDelphin, I’d be happy to help with that process. There’s a scope module, but it doesn’t enumerate fully-scoped MRSs yet. I had a partially working scope resolver some time ago, then my wife and I had twins and priorities were shifted. I have some notes in issues #241 and #221.

I’m afraid I don’t know what’s wrong with the LKB’s “FOL approximation”. I’m seeing the same behaviour in the ‘classic’ LKB as LKB-FOS. Some sentences do get some sort of output, e.g. “Every farmer owns a donkey.” but others don’t. It seems there are problems if a sentence contains pronouns, proper names, bare NPs, or numerals as determiners.

I don’t believe the FOL approximation could ever have worked for arbitrary sentences - lots of examples don’t have a FOL interpretation. One could push this further - e.g., simply by making the udef_q that one finds in bare NPs (and numerals as determiners) equivalent to an existential, as well as pronoun_q. But this presupposes that the FOL has a structure on the variables similar to that proposed by Link, so it’s not any version of classical FOL. And obviously this isn’t the correct analysis for many cases of bare plurals. As far as I remember, when I did it originally, I didn’t want to introduce any complications of that sort - it was just an illustration for the classical FOL examples.

All best,


Yes, I see. The code in src/tproving/qq-to-fol.lsp doesn’t attempt to deal with udef_q, pronoun_q or proper_q. Perhaps I could add a warning to the FOL approximation if any of these are encountered. If one wanted to hack around this, then the functions to look at are gq-quantifier-exp-p and convert-gq-quantifier.

No - of course there are many more quantifiers that are not processed than just the three I mentioned, so any check would have to be more sophisticated.

I am doing this myself @goodmami, in Python using PyDelphin at the moment. Once I get it working I’ll write up a blog entry on it and see if there is any code worth putting into PyDelphin. Thanks for the offer to help!

I haven’t installed and messed around in the LKB yet. That’s in the queue…