Understanding neg(e, h): "which rocks are not blue"

I’m working to understand how to interpret the neg(e, h) predication as part of writing up some documentation. I’m using what I thought was a straightforward example: “which rocks are not blue?”. Below are the first four parses from ACE.

Can anyone give me an interpretation of these trees below to give me an intuition on how this predication works? Or maybe a pointer to any background on how it is being used in the ERG? I’ve found lots of discussion on negation in the wiki, but it seems to be about even more advanced uses. I’m trying to understand what I think is the most basic use of it.

I think my base problem is how to think about the fact that neg(e, h) takes a scopal arg. What is being negated in the tree it has scope over?

The fourth parse has a structure that looks most like what I’d expect, where only the _blue_a_1 predication is negated, but no variables in that part of the tree are shared with the rest of the tree, so I’m doubly confused about that one.

[ TOP: h0
INDEX: e2
RELS: < 
[ neg LBL: h1 ARG0: e8 [ e SF: prop TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: h9 ]
[ _which_q LBL: h4 ARG0: x3 [ x PERS: 3 NUM: pl IND: + ] RSTR: h5 BODY: h6 ]
[ _rock_n_1 LBL: h7 ARG0: x3 [ x PERS: 3 NUM: pl IND: + ] ]
[ _blue_a_1 LBL: h10 ARG0: e2 [ e SF: ques TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: x3 ]
>
HCONS: < h0 qeq h1 h5 qeq h7 h9 qeq h10 > ]

                         ┌─────── _rock_n_1(x3)
        ┌── _which_q(x3,RSTR,BODY)
neg(e8,ARG1)                  └── _blue_a_1(e2,x3)
[ TOP: h0
INDEX: e2
RELS: < 
[ subord LBL: h1 ARG0: e14 [ e SF: prop ] ARG1: h15 ARG2: h16 ]
[ neg LBL: h11 ARG0: e12 [ e SF: prop TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: h13 ]
[ udef_q LBL: h4 ARG0: x5 [ x PERS: 3 NUM: pl IND: + ] RSTR: h6 BODY: h7 ]
[ _rock_n_1 LBL: h8 ARG0: x5 [ x PERS: 3 NUM: pl IND: + ] ]
[ _be_v_id LBL: h9 ARG0: e2 [ e SF: prop TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: x5 ARG2: i10 ]
[ _blue_a_1 LBL: h17 ARG0: e18 [ e SF: prop ] ARG1: i19 ]
>
HCONS: < h0 qeq h1 h6 qeq h8 h13 qeq h9 h15 qeq h11 h16 qeq h17 > ]

                                             ┌─────── _rock_n_1(x5)
                              ┌── udef_q(x5,RSTR,BODY)
            ┌─────── neg(e12,ARG1)                └── _be_v_id(e2,x5,i10)
subord(e14,ARG1,ARG2)
                 └── _blue_a_1(e18,i19)
[ TOP: h0
INDEX: e2
RELS: < 
[ neg LBL: h1 ARG0: e10 [ e SF: prop TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: h11 ]
[ udef_q LBL: h12 ARG0: x9 [ x PERS: 3 NUM: sg ] RSTR: h13 BODY: h14 ]
[ _blue_a_1 LBL: h15 ARG0: x9 [ x PERS: 3 NUM: sg ] ARG1: i16 ]
[ _which_q LBL: h4 ARG0: x3 [ x PERS: 3 NUM: pl IND: + ] RSTR: h5 BODY: h6 ]
[ _rock_n_1 LBL: h7 ARG0: x3 [ x PERS: 3 NUM: pl IND: + ] ]
[ _be_v_id LBL: h8 ARG0: e2 [ e SF: ques TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: x3 ARG2: x9 ]
>
HCONS: < h0 qeq h1 h5 qeq h7 h11 qeq h8 h13 qeq h15 > ]

                        ┌─────── _blue_a_1(x9,i16)
         ┌── udef_q(x9,RSTR,BODY)             ┌─────── _rock_n_1(x3)
neg(e10,ARG1)                └── _which_q(x3,RSTR,BODY)
                                                   └── _be_v_id(e2,x3,x9)
[ TOP: h0
INDEX: e2
RELS: < 
[ subord LBL: h1 ARG0: e11 [ e SF: prop ] ARG1: h12 ARG2: h13 ]
[ udef_q LBL: h4 ARG0: x5 [ x PERS: 3 NUM: pl IND: + ] RSTR: h6 BODY: h7 ]
[ _rock_n_1 LBL: h8 ARG0: x5 [ x PERS: 3 NUM: pl IND: + ] ]
[ _be_v_id LBL: h9 ARG0: e2 [ e SF: prop TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: x5 ARG2: i10 ]
[ neg LBL: h14 ARG0: e15 [ e SF: prop ] ARG1: h16 ]
[ _blue_a_1 LBL: h17 ARG0: e18 [ e SF: prop ] ARG1: i19 ]
>
HCONS: < h0 qeq h1 h6 qeq h8 h12 qeq h9 h13 qeq h14 h16 qeq h17 > ]

                                ┌─────── _rock_n_1(x5)
            ┌─────── udef_q(x5,RSTR,BODY)
            │                        └── _be_v_id(e2,x5,i10)
subord(e11,ARG1,ARG2)
                 │            ┌── _blue_a_1(e18,i19)
                 └── neg(e15,ARG1)

And there are other parses which seem more esoteric.

The truth value of the tree.

This isn’t straightforward, mainly because of “which”, but also because of “are blue”. _which_q looks like a quantifier, but wh-words don’t behave exactly like true quantifiers… and that is a longer story. In some of the parses, “which” is not interpreted as a question word. And some of the parses include _be_v_id.

The first parse is what you want.


I agree that the first reading is the one that you want. Also, your ASCII tree visualization is not fully motivated: the which_q predication, though not exactly a quantifier, doesn’t have fixed scope in the MRS. The choice to put it under rather than above neg_rel seems to have been made by whatever system is giving you those ASCII trees.

I was afraid you’d say that :slight_smile: . I’m probably going to butcher a lot here, but I’m hoping it’ll help you point me in the right direction. Here’s how I am thinking about this:

If I use an example model, {house, green rock, blue rock}, and assume the first parse is the one that means “tell me the rocks that are not blue”, we want the truth condition of the phrase to end up being x3={green rock}.

If the truth condition of the tree for “which rocks are blue” is x3={blue rock}, then I would naively assume that negating the whole tree that neg has scope over would give a truth condition of “all things in the model that are not a blue rock”, namely x3={house, green rock}.

It seems like the only way to get a truth condition of x3={green rock} would be if neg is really only negating _blue_a_1, effectively changing it into a not_blue_a_1 predication.
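
To make that concrete, here is a minimal sketch (Python, with invented entity names and extensions, not my actual solver) of the two candidate readings over that toy model:

    entities = {"house", "green_rock", "blue_rock"}
    rock = {"green_rock", "blue_rock"}    # extension of _rock_n_1
    blue = {"blue_rock"}                  # extension of _blue_a_1

    # Reading 1: negation inside the BODY of _which_q --
    # "for which rock x is it not the case that x is blue?"
    reading_1 = {x for x in rock if x not in blue}
    print(reading_1)    # {'green_rock'}

    # Reading 2: negation over the whole _which_q tree, read naively as
    # "which things are not blue rocks?"
    reading_2 = {x for x in entities if not (x in rock and x in blue)}
    print(reading_2)    # {'house', 'green_rock'}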

Looking at (what seems like) an analogous phrase, “which person does not have shoes?”, the same form of MRS is generated, but in this case I “intuitively” get the right truth condition by interpreting the neg over the whole tree as only applying to _have_v_1 (making it effectively a not_have_v_1 predication).

So my (sample size 2) theory of neg would have been something like “neg means invert the logic of the index of the tree it applies to” (since _blue_a_1’s ARG0 is also the INDEX of the original MRS).

Where am I going wrong here?

I think the tree I showed is one of the valid scope-resolved trees given the HCONS in the MRS, isn’t it? I’ve put “handle-labeled” trees in the diagrams below, and unless I’m missing something those are the only two valid trees that can be generated from this MRS. Thank you for pointing out that I was missing one!

The second one makes much more sense to me and aligns with what I described in my response to Guy about what I thought should be going on.

Does this mean that the first scope-resolved tree below really does have the meaning “what is not a blue rock?”

[ TOP: h0
INDEX: e2
RELS: < 
[ neg LBL: h1 ARG0: e8 [ e SF: prop TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: h9 ]
[ _which_q LBL: h4 ARG0: x3 [ x PERS: 3 NUM: pl IND: + ] RSTR: h5 BODY: h6 ]
[ _rock_n_1 LBL: h7 ARG0: x3 [ x PERS: 3 NUM: pl IND: + ] ]
[ _blue_a_1 LBL: h10 ARG0: e2 [ e SF: ques TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: x3 ]
>
HCONS: < h0 qeq h1 h5 qeq h7 h9 qeq h10 > ]

                               ┌─────── h7:_rock_n_1(x3)
                               h5
           ┌── h4:_which_q(x3,RSTR,BODY)
           h9                       h6
h1:neg(e8,ARG1)                     |
                                    └── h10:_blue_a_1(e2,x3)


                ┌─────── h7:_rock_n_1(x3)
                h5
h4:_which_q(x3,RSTR,BODY)
                     h6
                     |
                     h1:neg(e8,ARG1) 
                                 h9
                                 └── h10:_blue_a_1(e2,x3)

I don’t know what it means for the which_q predication to scope under negation. For the second of your trees, I’d gloss it as “For which rock is it not the case that the rock is blue?”

I don’t think we talk about questions having truth conditions. You might say that the answer you are looking for is one that makes a true statement if you replace “which x” in the question with that answer.

Is this a grammar bug that I should report?

OK, good to know. I’ve been treating questions and propositions (and commands for that matter) the same in terms of how their MRS is “solved”, meaning: determining how (or if) their variables can be set to instances in the world that make the MRS “true”. I’ve ended up treating the WH-predicate (i.e. _which_q) as providing the same semantics as _def_implicit_q (meaning it just scopes a variable) and as indicating which variable the asker would like to know about. All that is to say, it seemed like the same underlying mechanism of truth condition finding.

No. The grammar doesn’t resolve scope ambiguities. It’s simplest to assume that _which_q always takes wide scope, but it doesn’t make sense to try and encode this directly in HCONS. (Wide scope roughly matches how Ginzburg and Sag propose to represent the semantics of questions in “Interrogative Investigations”.)

Calculating a truth value does not mean assigning values to variables. In some sentences, it is possible to do this, e.g. “Some rock is blue” is true if we can find x such that x is a rock and x is blue (in logical terminology, such an x is a “witness”). However, for a sentence like “Every student speaks two languages”, the languages vary according to the student, and so there is no way to represent the semantics merely in terms of a set of students and a set of languages.
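
As a rough illustration of that point (a sketch with an invented model, not ERG machinery), the sentence can be checked by nested quantification even though no single set of language witnesses works for every student:

    students = {"joan", "lenny"}
    speaks = {("joan", "french"), ("joan", "mandarin"),
              ("lenny", "german"), ("lenny", "hokkien")}

    def speaks_two_languages(student):
        # the body holds if this student speaks (at least) two distinct
        # languages; which two may differ from student to student
        langs = {lang for (s, lang) in speaks if s == student}
        return len(langs) >= 2

    # "Every student speaks two languages" is true iff the body holds for
    # every student -- no single pair of languages witnesses all of them.
    print(all(speaks_two_languages(s) for s in students))    # True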

Compositional semantics of “which rocks are not blue?”:

  • [[rocks]]: a function from entities to truth values
  • [[blue]]: a function from pairs of entities to truth values
  • [[blue]] with implicit quantification of the event variable: a function from entities to truth values
  • [[are not blue]]: a function from entities to truth values (the inverse of the above)
  • [[which rocks are not blue?]]: a question targeting one free variable, with [[rocks]] as the restriction and [[are not blue]] as the body

In this particular case, if we ignore the event variable, we’re just dealing with unary functions, which can also be represented as sets of entities (given a model). But doing that in the general case would require sets of tuples of entities.
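
For what it’s worth, here is a minimal sketch of those denotations over a toy model (Python, illustrative names only, event variable ignored):

    entities = {"house", "green_rock", "blue_rock"}

    def rocks(x):           # [[rocks]]: entity -> truth value
        return x in {"green_rock", "blue_rock"}

    def blue(x):            # [[blue]] with the event variable quantified away
        return x in {"blue_rock"}

    def are_not_blue(x):    # [[are not blue]]: flips the truth value of blue
        return not blue(x)

    # [[which rocks are not blue?]]: a question with one free variable;
    # its answers are the entities satisfying restriction and body
    answers = {x for x in entities if rocks(x) and are_not_blue(x)}
    print(answers)          # {'green_rock'}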

I could only find reviews of that book online, so I may have to increase my bedside reading stack… I do note that modeling _which_q as having wide scope (or any scope for that matter) hasn’t caused any problems for the approach I am using to “solve” (i.e. “find witnesses for variables in”) question MRSs. It was the negation of it that I couldn’t understand. I assume this book covers negation as well, since it appears quite comprehensive.

I suspect this is a case of me butchering the terminology. Yes, the approach I am taking proves that an MRS is true or false by having each predication attempt to find values in the world model for its variables that make it true, and this often leads to an answer that is a set of sets of variable assignments.

For example: in the case of “Every student speaks two languages”, one resolved tree of the initial reading is “every student speaks (the same) two languages”:

[ TOP: h0
INDEX: e2
RELS: < [ udef_q LBL: h10 ARG0: x9 [ x PERS: 3 NUM: pl IND: + ] RSTR: h11 BODY: h12 ]
[ card LBL: h13 CARG: "2" ARG0: e15 [ e SF: prop TENSE: untensed MOOD: indicative PROG: - PERF: - ] ARG1: x9 ARG2: h9001 ]
[ _language_n_1 LBL: h13 ARG0: x9 [ x PERS: 3 NUM: pl IND: + ] ]
[ _every_q LBL: h4 ARG0: x3 [ x PERS: 3 NUM: sg IND: + ] RSTR: h5 BODY: h6 ]
[ _student_n_of LBL: h7 ARG0: x3 [ x PERS: 3 NUM: sg IND: + ] ARG1: i8 ]
[ _speak_v_to LBL: h1 ARG0: e2 [ e SF: prop TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: x3 ARG2: x9 ]
>
HCONS: < h0 qeq h1 h5 qeq h7 h11 qeq h13 > ]

                                              ┌ _language_n_1__x:x9
                 ┌───── card__cexh:2,e15,x9,ARG2
udef_q__xhh:x9,RSTR,BODY
                      │                    ┌───── _student_n_of__xi:x3,i8
                      └ _every_q__xhh:x3,RSTR,BODY
                                                └ _speak_v_to__exx:e2,x3,x9

The system I have been using produces an answer that contains all the sets of assignments that were required for the statement to be true as it was being proved. Assuming there are only two students, and both speak the same two languages, that is something like:

{{x3=joan, x9=french}, {x3=joan, x9=chinese}, {x3=lenny, x9=french}, {x3=lenny, x9=chinese}}
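
For illustration only (a sketch with invented helper names, not the actual system), an answer of that shape could be checked against the “(the same) two languages” reading like this:

    from collections import defaultdict

    assignments = [{"x3": "joan",  "x9": "french"},
                   {"x3": "joan",  "x9": "chinese"},
                   {"x3": "lenny", "x9": "french"},
                   {"x3": "lenny", "x9": "chinese"}]

    langs_by_student = defaultdict(set)
    for a in assignments:
        langs_by_student[a["x3"]].add(a["x9"])

    # "(the same) two languages": every student must be paired with the
    # same set of exactly two languages
    lang_sets = list(langs_by_student.values())
    same_two = all(s == lang_sets[0] and len(s) == 2 for s in lang_sets)
    print(same_two)    # True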

Looping back to your original point:

I understand one way to model the case that you’ve stated here, where “not blue” gets scope-resolved as neg(_blue_a_1) in the BODY: I literally model neg as modifying its scopal argument _blue_a_1, effectively turning it into a not_blue_a_1 predication. That’s a straightforward transformation.
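
As a sketch of that transformation (illustrative Python, not the actual implementation):

    def blue_a_1(x, model):
        return x in model["blue"]

    def neg(scopal_arg):
        # wrap the scopal argument so its truth value is inverted
        def negated(*args, **kwargs):
            return not scopal_arg(*args, **kwargs)
        return negated

    model = {"blue": {"blue_rock"}}
    not_blue_a_1 = neg(blue_a_1)    # behaves like a not_blue_a_1 predication
    print(not_blue_a_1("green_rock", model))    # True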

I’m still unclear on what it means in the resolved tree where neg has scope over the entire rest of the tree, as in neg(“which rocks are blue”) from my original example (the case that Emily said she didn’t know the meaning of). Do you have any thoughts on that (or is that what the book will answer)?

When I say _which_q should take “wide scope”, I mean it should appear at the top of the tree (or for embedded questions, the top of the relevant subtree). In the tree you’ve given, _which_q doesn’t take wide scope! That’s why Emily and I said we don’t think it makes sense, and it also won’t correspond to anything in Ginzburg and Sag’s analysis.

The difficult case, in terms of assigning values to variables, is when _every_q takes wide scope. Following your example, it should be true for a situation like: {{x3=joan, x9=french}, {x3=joan, x9=mandarin}, {x3=lenny, x9=german}, {x3=lenny, x9=hokkien}}. If your system can deal with that, then it’s probably doing the right thing.
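
A quick sketch of that check (invented data and names, just to make the wide-scope _every_q reading concrete):

    assignments = [{"x3": "joan",  "x9": "french"},
                   {"x3": "joan",  "x9": "mandarin"},
                   {"x3": "lenny", "x9": "german"},
                   {"x3": "lenny", "x9": "hokkien"}]

    students = {a["x3"] for a in assignments}
    # with _every_q taking wide scope, it is enough that each student has
    # some two languages; the pair may differ from student to student
    ok = all(len({a["x9"] for a in assignments if a["x3"] == s}) == 2
             for s in students)
    print(ok)    # True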

Ahhh, got it. Thanks for clarifying!

Yes, it handles that case just fine. However, I just realized that to do it my system had to convert card to a different form. You’ll see that in the MRS above it actually converted card to have a handle argument. The text below is the same resolved tree as above, but using the proper MRS:

                            ┌─ card__cex:2,e15,x9
                 ┌───── and:0,1
                 │            └_language_n_1__x:x9
udef_q__xhh:x9,RSTR,BODY
                      │                    ┌───── _student_n_of__xi:x3,i8
                      └ _every_q__xhh:x3,RSTR,BODY
                                                └ _speak_v_to__exx:e2,x3,x9

I’ll post a different question about that since it has always confused me.