Extracted-adj rule insisting on extracted or dropped subject; why?

Update: I can get the edge if I remove the #val identity from the supertype, but I am not sure what that means.

And now I don’t get the subject-head edge. So I cannot get just an adjunct extracted without also the argument… Can I have one rule that can do both? I still do not get a unification failure when interactively unifying the head daughter of the subj-head rule. Sorry for being so confused about this. I really appreciate the help.

For now, I added two rules: extracted-adj-only and extracted-adj-last. The first one requires a SLASH list of length 1. This is a bit unsatisfying because in the paper we say we can easily have multiple things on the SLASH list… which I suppose we still show; it just seems unsatisfying that we can no longer get a single-element SLASH list from the same rule. So if there were a way of doing both with one rule, that would be great.

I don’t see how it is possible that modifying the extracted adjunct rule, which wasn’t even being used, could cause a subject-head edge that was in the chart to vanish. The only way that conceivably makes sense to me is if packing is still enabled.

Sorry, I wasn’t clear. The extraction rule is being used; it is just that now I can get "where who went" but not "where the cat went", if that makes sense (unless I add a special rule for extracting just one adjunct and nothing else). I can see the extracted-adjunct edge in the chart for the second sentence, but not the subject-head edge.

I did try to disable packing (I parsed the sentence between disabling packing and printing the chart, via LKB-Top):

* (setq *chart-packing-p* nil)
NIL
* (print-chart-toplevel)

 > chart dump:

0-1 [7] WHERE => (where) [4]

1-2 [8] CAT => (cat) [5]
1-2 [9] BARE-NP => (cat) [8]

0-3 [15] ADJ-HEAD => (where cat sleeps) [7 14]
1-3 [14] SUBJ-HEAD => (cat sleeps) [9 10]
2-3 [10] SLEEPS => (sleeps) [6]
2-3 [11] EX-ADJ-LAST => (sleeps) [10]
2-3 [12] EX-SUBJ => (sleeps) [11]
2-3 [13] EX-SUBJ => (sleeps) [10]

OK, thank you for clarifying, I think I see the situation you find yourself in at least. I’m sure it is very frustrating to do these interactive unifications and get a fine looking AVM, and still not find it in the chart. Outside of known and prescribed conditions, that should not be possible, so aside from the matter of what may or may not be wrong with your analysis, I would be interested in looking at the grammar live to try to find out why the tools are not doing their job.

Thank you, Woodley, I will send the grammar to you via email (no way to attach it here).

I will happily look at that.

Another possible avenue for exploration would be to try this under ACE with -vvv. You will definitely want to redirect the output to a file, and you will have to do quite a bit of searching in the log to find the relevant information, but you should be able to determine (1) whether ACE considered applying the rule in the situation in question, as evidenced by the presence of a line like:
[#165 [lex 0 orth 0x0] + #463 [lex 0x0]] rule hd_xcmp_c?
(2) whether unification was attempted, as evidenced by a line like:
trying: #463 sign in #265 paired_bracket_rule[0]
and (3) whether unification succeeded, e.g.:
generated #484 1/1 from #252, #463 for [0-2] with rule vp_fin-frg_c: dogs sleep.
You will see both active (incomplete) and passive (complete) edges in the log, which may add to the confusion…

So I was forgetting that, in order to see the failure, I must try to unify not only the VP with the head daughter but also the NP with the non-head daughter (thanks to Woodley for taking the time to point this out). Sorry for wasting people’s time on this!

On to investigating the failure, which apparently has to do with just a spurious empty SLASH constraint somewhere.

Hmm. I am not seeing anything in this grammar explicitly constraining anything to be SLASH-empty…

Oh, so, the failure is probably because I have now messed up the append-list type hierarchy with this new placeholder subtype:

[screenshot: the append-list type hierarchy with the new placeholder subtype]

In the case of "where cat sleeps" (pseudolanguage), cat’s SLASH is 0-alist, but the new version of the adjunct extraction rule, which had already applied, had made it append-list-with-placeholder.

Do I need more subtypes? How do I properly integrate this new placeholdery subtype into the hierarchy?

Looks like I can do it like so:

0-1-alist := append-list-with-placeholder &
  [ LIST 0-1-list ].

and the rest can maybe stay as is; we’ll see.

Oh, I thought you stopped using the subtypes like 0-alist (I think they are superfluous and lead to bugs like this). If you want to keep them, you just need to explicitly allow the subtypes, e.g.:

0-alist-with-placeholder := 0-alist & append-list-with-placeholder.

But now I’m confused, because I only suggested using this placeholder type when it’s about to be used in an append, so it should definitely result in a non-empty list. In such a case, if you add the subtype, you’ll still get a failure inside LIST. Have you added this type somewhere else?!

Edit: I see in the above that you’ve added it to extracted-adj-phrase's HEAD-DTR.SYNSEM.NON-LOCAL.SLASH. No placeholder is needed here.

Here are all mentions that I now have of it:

append-list-with-placeholder := append-list &
  [ PLACEHOLDER.LIST < [ ] > ].

0-1-alist := append-list-with-placeholder &
  [ LIST 0-1-list ].

0-alist := 0-1-alist &
  [ LIST null ].

1-alist := 0-1-alist &
  [ LIST 1-list ].

extracted-adj-phrase := basic-extracted-adj-phrase &
  [ SYNSEM [ LOCAL.CAT [ WH #wh,
                         VAL.SPEC < >,
                         POSTHEAD #ph,
                         MC #mc ],
             NON-LOCAL [ QUE #que,
                         YNQ #ynq,
                         SLASH append-list-with-placeholder &
                               [ PLACEHOLDER.LIST
                                   < [ CAT [ HEAD +rp &
                                                  [ MOD < [ LOCAL intersective-mod &
                                                                  [ CAT [ HEAD #head,
                                                                          POSTHEAD #ph,
                                                                          MC #mc ],
                                                                    CONT.HOOK #hook,
                                                                    CTXT #ctxt ] ] > ],
                                             VAL [ SPEC < >,
                                                   SUBJ olist,
                                                   COMPS olist,
                                                   SPR olist ] ] ] > ] ] ],
    HEAD-DTR.SYNSEM canonical-synsem &
                    [ LOCAL local &
                            [ CAT [ HEAD #head & verb,
                                    WH #wh,
                                    VAL [ SUBJ < synsem-min >, SPEC < > ],
                                    POSTHEAD #ph,
                                    MC #mc ],
                              CONT.HOOK #hook,
                              CTXT #ctxt ],
                      MODIFIED notmod,
                      NON-LOCAL [ QUE #que, SLASH append-list-with-placeholder, YNQ #ynq ] ],
    C-CONT [ HOOK #hook,
             RELS.LIST < >,
             HCONS.LIST < >,
             ICONS.LIST < > ] ].

extracted-adj-last-phrase := extracted-adj-phrase &
  [ SYNSEM [ NON-LOCAL.SLASH append-list &
                              [ APPEND < #slash, #placeholder >,
                                PLACEHOLDER #placeholder ] ],
    HEAD-DTR.SYNSEM.NON-LOCAL.SLASH #slash ].

; OZ 2020-07-01 This requires that any arguments are extracted after the adjunct.
extracted-adj-first-phrase := extracted-adj-phrase &
  [ SYNSEM [ NON-LOCAL.SLASH append-list &
                              [ APPEND < #placeholder, #slash >,
                                PLACEHOLDER #placeholder ] ],
    HEAD-DTR.SYNSEM.NON-LOCAL.SLASH #slash ].

Should it go away from some of these?

And yes, I could get rid of 0-alists and replace them all with LIST < >. It seemed easier to write but I see what you mean about bugs.

OK, thank you again, Guy. I cleaned it up, so I now have only two mentions of this new type: the declaration and the one mention in the extracted-adj supertype. All my small tests are now passing! Yay. On to looking at what Russian now does :slight_smile:.

I wrote those types because I was just mechanically replacing diff-lists. But the fact that you can just write LIST < > means that you don’t need them. With diff-lists, it’s a different story, because you need to have [ LIST #last, LAST #last ].
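For concreteness, a minimal sketch of the contrast (0-dlist is just an illustrative name here, not necessarily a type in your grammar):

; With append-lists, "empty" can be stated inline on the path itself:
;   SYNSEM.NON-LOCAL.SLASH.LIST < >
; An empty diff-list, by contrast, needs the LIST/LAST reentrancy, which is
; why a dedicated subtype is genuinely useful in that setting:
0-dlist := diff-list &
  [ LIST #last,
    LAST #last ].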

It has been a bit annoying to have to write SLASH.LIST < >; SLASH 0-alist looked cleaner at the time. But that really would be the least of my worries right now :sweat_smile:

So yeah, I can get rid of them.

Glad to hear the tests are passing.

If you do want to have other subtypes of append-list, it’s also worth mentioning that the PLACEHOLDER could go somewhere else – I just thought it would keep things clear to have it inside the append-list.

All right, I think I am now observing all the various kinds of extraction that I wanted. Thanks a lot for the help.

Although actually, having two rules, adjunct-first and adjunct-last, that can apply to a SLASH list of any length, instead of having them for SLASH lists of length at least one plus a special rule for just one adjunct extracted and nothing else, isn’t that great: now both rules apply in sentences where just one adjunct is extracted.

So, perhaps it is better to have separate rules after all (I can’t immediately think of any feature-based way of preventing this from happening).

Good point. For one of the rules, you could specify HEAD-DTR...SLASH.LIST cons.
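For example, a sketch only (I am arbitrarily picking the adjunct-first rule; the same constraint could equally go on the other one): requiring the head daughter’s SLASH list to be non-empty means this rule cannot be the one that extracts the sole adjunct, so only the other rule applies in that case.

; Sketch: require the head daughter to already carry something on SLASH,
; so this rule cannot apply when the adjunct is the only extracted element.
extracted-adj-first-phrase := extracted-adj-phrase &
  [ SYNSEM [ NON-LOCAL.SLASH append-list &
                              [ APPEND < #placeholder, #slash >,
                                PLACEHOLDER #placeholder ] ],
    HEAD-DTR.SYNSEM.NON-LOCAL.SLASH #slash & [ LIST cons ] ].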

Thinking about the shuffle operator has been an interesting exercise in combining wrapper types with unary rules, to allow non-deterministic computation (creating multiple edges with different outputs). In fact, we could also view SLASH list insertion as a non-deterministic operation.

The way I presented wrapper types before, unifying the “input” data and the computation type triggers a type constraint, and this continues recursively, until we get a computation type that gives a defined result.

We can see a non-deterministic computation type as “missing” some information. If we know this information, we could continue the computation to get a result. So, we can have unary rules that supply different options for this missing information. For each way of completing the computation, we have a different unary rule, and a different output.

The unary rules can also recurse, to allow the number of “branching points” in the computation to depend on the input. For example, in the case of the shuffle operator, we need to have one sequence of unary rules for each permutation. We can decompose a permutation as a recursive series of decisions: we can see a permutation as first choosing which element to put at the start of the list, and then permuting the rest; we can see the choice of an element as maintaining a pointer into the list and choosing to either take the current element or increment the pointer.
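As a tiny illustration of that decomposition (nothing grammar-specific here, just the bookkeeping):

; Permuting < A, B, C > (3! = 6 outputs) as a series of binary decisions:
; choose the front element via "take current element" vs. "increment pointer",
;   take A                  -> A . permute < B, C >
;   skip A, take B          -> B . permute < A, C >
;   skip A, skip B, take C  -> C . permute < A, B >
; then recurse on the remaining two-element list in the same way.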

Inserting an element somewhere in a list (or popping an element from somewhere in a list) is simpler than shuffling a list – for a list of length n, there are only n+1 options for pushing (and n options for popping), rather than n! options for shuffling. In terms of the API, it would just look like:

SLASH.PUSH-ANYWHERE < #slash, [...] >

But this operation can only create the output list in the case where #slash is empty. If #slash is non-empty, this operation just sets up a data structure which needs to be the input for a unary rule. It will create a pointer to the beginning of the list, and then two unary rules could apply. One rule will say to insert the element at the current pointer position. The other rule will say to increment the pointer. Once all edges have recursively been added to the chart, there will be one edge for each possible SLASH list. (And also some extra edges for intermediate stages of computation. In this case, there would be n-1 extra edges. It would make sense to add a feature and a constraint to block these intermediate edges from participating in anything other than the non-deterministic computation rules.)
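To make the counting concrete (purely illustrative):

; Pushing a new element X "anywhere" into SLASH < A, B > (n = 2) should
; eventually yield n+1 = 3 complete edges:
;   < X, A, B >   insert at the initial pointer position
;   < A, X, B >   increment the pointer once, then insert
;   < A, B, X >   increment the pointer twice, then insert
; Any remaining edges are intermediate stages of the computation, which is
; why blocking them from feeding ordinary rules makes sense.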

So using these non-deterministic PUSH-ANYWHERE and POP-ANYWHERE operations, we can simulate sets.
