Not everyone is able to walk a lion

Hi @Dan ,

Is it right to have _not_x_deg sharing the label with every_q in the MRS below?

[ TOP: h0
  INDEX: e2
  RELS: < [ _not_x_deg<0:3> LBL: h4 ARG0: e5 ARG1: x3 ]
          [ person<4:12> LBL: h6 ARG0: x3 ]
          [ every_q<4:12> LBL: h4 ARG0: x3 RSTR: h7 BODY: h8 ]
          [ _able_a_1<16:20> LBL: h1 ARG0: e2 ARG1: x3 ARG2: h9 ]
          [ _walk_v_1<24:28> LBL: h10 ARG0: e11 ARG1: x3 ARG2: x12 ]
          [ _a_q<29:30> LBL: h13 ARG0: x12 RSTR: h14 BODY: h15 ]
          [ _lion_n_1<31:35> LBL: h16 ARG0: x12 ] >
  HCONS: < h0 qeq h1 h7 qeq h6 h9 qeq h10 h14 qeq h16 > ]
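To make the shared label explicit, here is a minimal stdlib-only sketch that groups the EPs above by their LBL values (the regex assumes the simple-MRS text layout shown here):

```python
import re
from collections import defaultdict

# the RELS portion of the MRS above, transcribed verbatim
RELS = """
[ _not_x_deg<0:3> LBL: h4 ARG0: e5 ARG1: x3 ]
[ person<4:12> LBL: h6 ARG0: x3 ]
[ every_q<4:12> LBL: h4 ARG0: x3 RSTR: h7 BODY: h8 ]
[ _able_a_1<16:20> LBL: h1 ARG0: e2 ARG1: x3 ARG2: h9 ]
[ _walk_v_1<24:28> LBL: h10 ARG0: e11 ARG1: x3 ARG2: x12 ]
[ _a_q<29:30> LBL: h13 ARG0: x12 RSTR: h14 BODY: h15 ]
[ _lion_n_1<31:35> LBL: h16 ARG0: x12 ]
"""

def eps_by_label(rels):
    """Group predicate names by the label (LBL) they are declared with."""
    groups = defaultdict(list)
    for pred, lbl in re.findall(r'\[ ([^<\s]+)<\d+:\d+> LBL: (h\d+)', rels):
        groups[lbl].append(pred)
    return dict(groups)

print(eps_by_label(RELS)['h4'])  # → ['_not_x_deg', 'every_q']
```

Only h4 is shared by two EPs; every other label hosts a single predicate.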

('h13', {'h14': 'h16', 'h15': 'h4', 'h7': 'h6', 'h8': 'h1', 'h9': 'h10'})
('h4', {'h7': 'h6', 'h8': 'h1', 'h9': 'h13', 'h14': 'h16', 'h15': 'h10'})
('h4', {'h7': 'h6', 'h8': 'h13', 'h14': 'h16', 'h15': 'h1', 'h9': 'h10'})

Note that none of the scope resolutions I got from Utool (“The Swiss Army Knife of Underspecification”) places _not_x_deg under the scope of the every_q that introduces x3.
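To see this concretely, here is a small stdlib-only sketch that expands the first plugging above into its scope tree; the EP table (label → predicates with their scopal argument slots) is transcribed by hand from the MRS:

```python
# EPs keyed by label; each entry is (predicate, list of scopal holes)
EPS = {
    'h4':  [('_not_x_deg', []), ('every_q', ['h7', 'h8'])],
    'h6':  [('person', [])],
    'h1':  [('_able_a_1', ['h9'])],
    'h10': [('_walk_v_1', [])],
    'h13': [('_a_q', ['h14', 'h15'])],
    'h16': [('_lion_n_1', [])],
}

def tree(label, plug, depth=0):
    """Render the scope tree rooted at `label`, given a hole-to-label plugging."""
    lines = []
    for pred, holes in EPS[label]:
        lines.append('  ' * depth + pred)
        for hole in holes:
            lines.extend(tree(plug[hole], plug, depth + 1))
    return lines

# the first solver output above: top label h13 plus its plugging
top, plug = 'h13', {'h14': 'h16', 'h15': 'h4',
                    'h7': 'h6', 'h8': 'h1', 'h9': 'h10'}
print('\n'.join(tree(top, plug)))
```

In the resulting tree, _not_x_deg comes out as a sister of every_q (the two are conjoined at h4), never inside its restrictor or body.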

I have other cases to ask for your comments on, all of them from the SICK text-entailment dataset, which contains 6077 sentences (http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf). I have a few MRSs that Utool rejected as ill-formed, and around 35 sentences that ERG+ACE could not parse.

(The Discourse site seems to be down at the moment, so let’s see if the email-reply feature works…)

Yes, the MRS for “not everyone” is as intended, with that strange minimal linking of the “not” EP with that of “every” by the mere sharing of their labels and nothing else. There isn’t any place in the scope tree under “every” where that “not” could go - it shouldn’t outscope either the restrictor or the body. A more reasonable account would be to somehow combine the two predicates of “not” and “every” into a single EP supplying a new complex quantifier, but we don’t currently have a formally solid proposal for how to do that kind of predicate-merging in MRS (as far as I know). See the discussion on the EdsTop wiki page (github.com/delph-in/docs/wiki/EdsTop), in the section on Graph Connectivity, for more explanation of this problem, and for a possible approach to handling these graphs that UTool rightly objects to.

I’ll be glad to discuss the other UTool-incompatible examples you have collected, to see if some reveal bugs in the current ERG, or possible shortcomings in the alignment of UTool and our implementations of MRS.


Hi @Dan , sorry for this late reply. I have the full SICK dataset processed with ERG at:

https://github.com/arademaker/sick-fftb

In this repo I actually did two things.

  1. I sampled 100 sentences and evaluated the performance of the pre-trained parse-ranking model of the ERG. For that, I did the treebanking with FFTB and compared the results to the ERG output using https://github.com/delph-in/delphin.edm

  2. I processed all sentences, asking for the top 10 readings. For each MRS I tried to resolve the scope of the quantifiers using Utool, logging the errors from this process in https://github.com/arademaker/sick-fftb/blob/master/solver.txt

Note that in most cases the ‘invalid’ MRS is not the first reading, but https://github.com/arademaker/sick-fftb/blob/master/solver.txt#L29 is a case where even the first reading (0/10) could not be processed by Utool.
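The batch step in item 2 can be sketched roughly as below. This is an assumption-laden sketch, not the actual script from the repo: the jar name and the solve/codec flags are from memory, so check them against your Utool version's help output.

```python
import subprocess

UTOOL_JAR = "utool.jar"  # assumed path to the Utool jar

def utool_cmd(mrs_path, jar=UTOOL_JAR):
    """Build one solver invocation (subcommand and codec flag are assumptions)."""
    return ["java", "-jar", jar, "solve", "-I", "mrs-prolog", mrs_path]

def solve_all(paths, log="solver.txt"):
    """Run Utool on each MRS file, appending failures to a solver.txt-style log."""
    with open(log, "a") as out:
        for path in paths:
            proc = subprocess.run(utool_cmd(path), capture_output=True, text=True)
            if proc.returncode != 0:
                out.write(f"{path}: {proc.stderr.strip()}\n")
```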

In https://github.com/arademaker/sick-fftb/blob/master/not-parsed.txt I have all the sentences that ERG was not able to parse.

Thanks for the analysis and the data. I will go through them more systematically, but looking at a few of the examples, it seems that Utool is unhappy with the ERG’s analysis for nominalizations as in the “fishing” of “…are holding fishing poles”. Here the EP for “fish” does not have any of its arguments filled, and its label is the ARG1 of the nominalization EP. Can you suggest a better target MRS representation for this construction that would please Utool?