Edge can be built interactively, but isn't in the chart

This is really interesting! I had wondered if the APPEND feature would cause problems somewhere…

I agree that this is a problem for append-lists and not diff-lists. The re-entrancies in a diff-list are only relevant if you’re looking at a structure large enough to contain both the resulting list and the original lists. With an append-list, that happens immediately.

Given that we already have features like ARGS that carry around the derivation history, this doesn’t seem like a formal problem to me (although I will admit that ARGS is further away in the structure).

It would be possible to move all of the append calculations further away, but this would make the syntax awkward, which would go against my original motivation… We could have an APPEND feature at the top level (like ARGS), which contains a list of list-appends, as pairs of (resulting list, list of lists to be appended). This would be a solution if it’s important that both: 1. we can close lists and still append to them, and 2. we cannot have the derivation history “nearby”.

Thinking more generally – if this solution is appealing, we could introduce a RELATIONAL feature at the top level, which has a feature for each type of “relational constraint” defined in the type hierarchy, i.e. what I called “wrapper types” in my talk at the last summit. So RELATIONAL.APPEND contains a list of the list appends used in the current feature structure, and then if we also want to have logical or numeric operations, we could have RELATIONAL.AND, RELATIONAL.SUM, etc.
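
A rough sketch of what that geometry might look like (all of the type and feature names here are placeholders, not existing Matrix types):

relational-constraints := *top* &
  [ APPEND list,   ; list of list-append specifications
    AND list,      ; list of boolean conjunctions
    SUM list ].    ; list of numeric sums

; The feature itself would then sit at the top level, alongside ARGS:
; sign := ... & [ RELATIONAL relational-constraints ].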

If someone is writing constraints about rules’ daughters’ lists’ computational histories… well, good luck to them :stuck_out_tongue:

And finally, if the number of coordination rules is causing a problem, wrapper types should be able to cut this down somewhat (at least at the level of an individual grammar, if not the core Matrix), e.g. choosing a gender for the mother as a function of the gender of the children. Although they would also run into this problem of carrying around the derivation history (unless we move the constraints, as described above).

At the risk of accumulating further technical debt… I could write you a type that would identify the NON-LOCAL LISTs of each COMPS element. But this would add extra structure in a similar way to append-lists. (i.e. either nearby in the structure, which is more readable, or far away, which is less readable but avoids problems like the above.)

I think using deleted-daughters is a more sustainable way to avoid the problem. But I can’t comment on why it works in ACE but not the LKB.

Mockup of proposed syntax, where all lists are really of type list (i.e. there are no append-lists or diff-lists anywhere):

my-unary-phrase := phrase &
  [ SYNSEM.LOCAL.CONT [ RELS #r,
                        HCONS #h,
                        ICONS #i ],
    DTR.SYNSEM.LOCAL.CONT [ RELS #r1,
                            HCONS #h1,
                            ICONS #i1 ],
    C-CONT [ RELS #r2,
             HCONS #h2,
             ICONS #i2 ],
    RELATIONAL.APPEND < [ INPUT < #r1, #r2 >,
                          OUTPUT #r ],
                        [ INPUT < #h1, #h2 >,
                          OUTPUT #h ],
                        [ INPUT < #i1, #i2 >,
                          OUTPUT #i ] > ].

The “input” to each “relational constraint” would still be destructively modified, but this is already true for diff-lists (the LIST of a diff-list contains the local list followed by everything that was appended to it). The important point is that for both diff-lists with manual appending, and the above proposal, you cannot locally see the derivational history in the list.
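
For comparison, here is the diff-list version of the same unary phrase (schematic, with only RELS shown for brevity; in a real grammar the append is usually packaged into supertypes, but the shape is the same). The daughter’s LAST is identified with C-CONT’s LIST, so after the rule applies, the daughter’s LIST also contains everything that C-CONT contributed:

my-unary-phrase-dl := phrase &
  [ SYNSEM.LOCAL.CONT.RELS [ LIST #first,
                             LAST #last ],
    DTR.SYNSEM.LOCAL.CONT.RELS [ LIST #first,
                                 LAST #middle ],
    C-CONT.RELS [ LIST #middle,
                  LAST #last ] ].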

Interesting – I like the idea of a “RELATIONAL” feature that stores all of this stuff far enough away in the feature structure that it can more easily be stripped out (avoiding the problem encountered here) while also being sufficiently conceptually simple.

And yes, I can also see the application to feature resolution in coordination (which is only part of why we have many coordination rules, but it’s a combinatoric explosion, so even reducing that factor would be nice)…

relational constraints at last :slight_smile:


Yeah, a feature RELATIONAL would make our favorite type of HPSG paper title, “X and Y without relational constraints”, somewhat ironic :slight_smile:. Not that it is a reason not to use it.

Sadly, I have to resurrect this thread and recant my earlier heresy. Using deleted-daughters to remove the computation features does indeed destroy the evidence of the computational history, but at an unacceptable cost that I failed to foresee. The trick works well enough for appends of the most vanilla type where the contents of the lists to be appended are already known.

Unfortunately, it does not work for appends where the lists to be appended are not yet specified at the point in the tree where the append lives. This happens in just about every sentence in the form of lexical threading of SLASH values. The append is stipulated at the lexical level, and at that point all we know about the resulting list is that it is of type list. The magic that computes its value happens down the road when the arguments are realized and their SLASH lists become known, since that is when the list-copy types get specialized to cons-copy or null-copy and all the other wizardry kicks in. If there are grammar rules interposed between the lexeme and the argument realization rule (as is very often the case), the deleted-daughters trick helpfully erases all record of those computation types, so when the argument eventually does get discharged, the computation doesn’t happen and the SLASH value remains an underspecified list. This in turn opens the grammar up to licensing all manner of reprobate trees.
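
Schematically, the lexical threading append looks something like this (simplified; the real Matrix types carry more constraints, and the two-argument case is shown just for concreteness). At the lexical level, #slash1 and #slash2 are still completely unconstrained, so none of the copying subtypes can fire yet:

basic-two-arg := lex-item &
  [ ARG-ST < [ NON-LOCAL.SLASH #slash1 ],
             [ NON-LOCAL.SLASH #slash2 ] >,
    SYNSEM.NON-LOCAL.SLASH.APPEND < #slash1, #slash2 > ].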

So, I think using deleted-daughters to solve the problem is not a solution, at least if lexical threading is to be implemented using append-lists.

I remember discussing both the deleted-daughters option and lexical threading in relation to append lists at the summit in Cambridge, but I can’t quite recall whether we predicted and then forgot about this problem or just didn’t connect the dots then (as I failed to last week).


Indeed – we did notice it: http://moin.delph-in.net/CambridgeExtractionAppend

So, I think that takes us back to moving the computation out far enough that we’re not trying to identify structures that include it.

@guyemerson – in your mockup above, is the idea that the value of RELATIONAL.APPEND is all of the appends taking place in the local feature structure? That is, that list is always of known length and is defined in either a lexical type or a (lexical or phrase structure) rule?

All – Do you think it would be cleaner to pull such computation out into a “RELATIONAL” feature across the board (e.g. including the RELS and HCONS lists) or only where we need to (NON-LOCAL)?

I’m concerned that the top level RELATIONAL feature approach will fail for basically the same reason (although I would be happy to be shown wrong). The computation machinery for the lexical threading append will be in the RELATIONAL feature at the lexical level of the tree. By the time that edge has undergone a rule or two and is ready to pick up its arguments, the append types will be buried at some path like ARGS FIRST ARGS FIRST RELATIONAL. Since ARGS is on the deleted-daughters list, the (granddaughter’s) append types won’t actually be there anymore on the daughter and the computation won’t happen.

In the original scheme, the computation types were automatically carried up the tree since they were part of the list structure, so they didn’t get lost when ARGS was trimmed.

@sweaglesw Thanks for looking into this further!

This sounds solvable, with the same mocked up syntax that I’ve shown above, but invoking even more machinery behind the scenes…

The RELATIONAL constraints would need to get passed up the tree, to avoid getting deleted. So we would need another feature, say ACCUMULATED-RELATIONAL, which accumulates the RELATIONAL constraints of the daughters (using list appends). The daughters’ constraints get deleted (because they’re inside ARGS) but they’ve already been copied up, and the copies can still be enforced.
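
Very roughly, and glossing over the exact types (all names here are placeholders, and I’m treating both RELATIONAL and ACCUMULATED-RELATIONAL as append-lists of relational constraints), a binary rule would do something like:

binary-phrase := phrase &
  [ RELATIONAL #local,
    ACCUMULATED-RELATIONAL.APPEND < #local, #acc1, #acc2 >,
    ARGS < [ ACCUMULATED-RELATIONAL #acc1 ],
           [ ACCUMULATED-RELATIONAL #acc2 ] > ].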

@ebender As for whether we want both mechanisms, I think using append-lists allows a nicer syntax, and is more efficient than having ACCUMULATED-RELATIONAL (I would need to implement this to quantify how much). I agree it would be cleaner to have just one way to specify appends, though.

I’m beginning to think that it might be easier to move away from the lexical threading (which itself is difficult to maintain in many other ways and difficult to reason about) than to go down this path. Because the problems we’re seeing are specifically at the intersection of the computation bits getting deleted along with deleted-daughters and lexical threading (i.e. information required for the computation not being accessible locally), right?

I’d like to reconstruct why we want lexical threading. I think @olzama has looked into this some. My recollection is that it has to do with traceless LDDs, but in the Matrix (following the ERG), we have rules that introduce the gaps anyway (extra-subj, extra-comp) and we don’t include adjuncts on any extended DEPS list. @Dan: Can you make any guesses as to where we would see trouble if we treated the NON-LOCAL values of head-subj, head-spr and head-comp the way we do for head-adj?
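
(That is, roughly the following for head-comp, with the phrase itself appending its daughters’ SLASH lists rather than relying on the head’s lexical entry to do the threading; the type name is made up and the paths are approximate:)

head-comp-no-threading := phrase &
  [ SYNSEM.NON-LOCAL.SLASH.APPEND < #hslash, #cslash >,
    HEAD-DTR.SYNSEM.NON-LOCAL.SLASH #hslash,
    NON-HEAD-DTR.SYNSEM.NON-LOCAL.SLASH #cslash ].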

That’s my understanding of Woodley’s analysis. The computation is only lost if it’s delayed (via underspecification, so that the computation subtypes don’t get triggered). For the use case of identifying big chunks of structure in coordination, moving the computation to RELATIONAL or using deleted-daughters on APPEND should both be fine.

Here’s one related thread, and that’s probably the most focused one; there are more though that talk about lexical threading:

  1. regarding multiple adjunct extraction
  2. resumption in Nuuchanulth
  3. regarding multiple questions
  4. regarding QUE values

(and there were a couple more threads which mentioned lexical threading but looked more tangential.)

On further reflection, I think it might be enough to use the current append-list type, with deleted-daughters including APPEND, but crucially not NEW-LIST and END-LIST.
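
(For concreteness, in ACE that would mean adding APPEND to the deleted-daughters setting in the grammar’s config file, something like the line below; the exact set of daughter features varies by grammar, so take this only as an indication of where the change goes:)

deleted-daughters := ARGS HEAD-DTR NON-HEAD-DTR DTR APPEND.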

I know Woodley has pointed out a cyclic re-entrancy caused by NEW-LIST, but I think this could be a bug in the grammar, rather than a fundamental problem.

Thank you Olga for digging up those references. It sounds like the main justification for lexical threading is allowing for lexical material that selects a gappy complement without passing that gap up, such as tough adjectives. This seems like a fairly strong argument.

I am curious however whether the lexical threading account of tough can in principle accommodate sonata-violin sentences where we want tough to discharge one of the gaps but pass up the other(s). At first I thought we would lose the other gap(s) if it just ignored the complement’s SLASH list, but it would seem plausible to say the SLASH of tough is the append of all its arguments excluding the first element of the complement’s SLASH. With append-lists I guess you would need to not append the complement’s SLASH but instead append a newly fabricated append-list, something like [ SLASH.APPEND < #subjslash, append-list & [ LIST #1 ] >, COMPS…SLASH.LIST.REST #1 ].

I’ve worked out how to fix the Meithei grammar, but I will admit that this example shows append-lists are not as intuitive to use as I’d hoped. We can amend basic-one-arg to be like this (and similarly for REL and QUE):

basic-one-arg := lex-item &
  [ ARG-ST < [ NON-LOCAL.SLASH #slash ] >,
    SYNSEM.NON-LOCAL.SLASH.APPEND < #slash > ].

This looks weird, because we’re appending a single list… which will just give the same elements as the original list. If we were just looking at SLASH, these should be equivalent. The problem comes because the coordination rule identifies both NON-LOCAL and VAL. The first element on ARG-ST is also on SUBJ, which creates the second type of cyclic re-entrancy that @sweaglesw pointed out above. To show the problem more directly (without the chain of re-entrancies we have here), both of the following will fail:

append-list & #cycle & [ APPEND < #cycle > ]
list-copy & #cycle & [ NEW-LIST #cycle ]

Stated like this, the cyclic re-entrancy is obvious. However, these cycles come up (through the chain of re-entrancies) because mathematically, any list appended to a null list is the original list. Unfortunately, appending via the list-copy type is destructive. Now, appending using the diff-list type is also destructive, but in a way that avoids a cycle in this specific case. (There are other cases where both types of append will cause a problem – for example, in both cases, we can’t append to a list in two different ways, because the resulting lists are identified.)

The patch I’ve given above is necessary to avoid a list-copy being identified with its own NEW-LIST. Without the patch, one SUBJ's SLASH's new list is appended to a null list, which is mathematically equivalent to the other SUBJ's SLASH, but we unfortunately have a cycle. With the patch, both coordinands use an append, so we end up identifying the NEW-LIST of each SUBJ's SLASH, and not the original lists. For the intransitive verb, we’re appending a single null, and for the transitive verb, we’re appending two nulls. As long as we have APPEND in the deleted-daughters config (or equivalently, if the appends are moved out to a RELATIONAL feature at the top of the structure), we’re only identifying the output of each append and not each list of append-lists. (If we don’t put APPEND in deleted-daughters, this creates the first kind of cyclic re-entrancy that @sweaglesw pointed out above. We can’t have NEW-LIST and END-LIST in deleted-daughters, or else the appends are lost completely.)

So in conclusion: list appends with the list-copy type (invoked by the append-list type) are compatible with coordination and lexical threading, but we need to: (1) either put APPEND in deleted-daughters, or move appends out to a RELATIONAL feature; and (2) be consistent in the use of appends. The consistency we need here is using a trivial append for intransitives, because we’re using an append for transitives and ditransitives. I admit that this inconsistency is in code that I wrote! I hadn’t anticipated it could cause a problem.

I agree this seems like a strong argument! Lexical threading for tough adjectives and embedded clauses would be a tough analysis to try to replace.

Your suggestion for sonata-violin sentences sounds like it would work. Having to fabricate a new append-list is a little awkward, but maybe that’s a price worth paying.

On a separate note, I realised a problem with the RELATIONAL feature I proposed above – it seems I overlooked @ebender’s question about whether the RELATIONAL feature would contain all of the appends in the local feature structure. The answer is yes, but I now realise this is problematic: if we have appends defined by different supertypes, there’s no simple way to collect all these appends. So I think it won’t work beyond a very simple toy grammar. I think using deleted-daughters is the right way to go. (It’s not strictly necessary, since you could write a grammar carefully so that only the LISTs are identified and never the APPENDs… but I think the use case here is a pretty compelling example of identifying large chunks of structure that contain append-lists.)


I think your patch solves the coordination problem case on display above, but I’m worried it doesn’t solve the general case. What you’ve done is made sure the lists being unified belong to the same generation, so to speak. Identifying VAL means their SUBJ’s SLASHes are identified, and identifying NON-LOCAL means the coordinated projections’ SLASHes are identified. Before your patch, the RHS’s SLASH was sort of a 0’th generation copy of its SUBJ SLASH, and the LHS’s SLASH was a 1st generation copy of its SUBJ SLASH, so unifying them resulted in a cycle (#1 [ NEW-LIST #1 ]). Your patch made them both 1st generation copies. I think in a sufficiently complex grammar there will be cases where you can’t guarantee that the generations are the same. I don’t think the toy grammar is complex enough to parse this (or is it?), but for a thought-experiment, consider:

That’s the cookie that the monkeys slept near and ate.

The SLASH of “ate” will presumably be a 1st generation copy/append of its SUBJ SLASH with its COMPS’ SLASH. That is to say, its SLASH will be SUBJ…SLASH.NEW-LIST.

The SLASH of “slept” will be (with your patch) a 1st generation copy of its SUBJ SLASH. The head-adjunct rule has to do an append to collect the gap on the PP, so the SLASH of “slept near” will be 2nd generation with respect to its SUBJ SLASH. That is to say, its SLASH will be SUBJ…SLASH.NEW-LIST.NEW-LIST.

Unifying those NON-LOCAL.SLASH values will, if I’m not mistaken, cause a cycle again. Maybe I’m confusing myself and made a mistake above though?

I neglected to reply to this before, but I do feel that there is a difference between using deleted-daughters on ARGS vs on APPEND. The former case is supposed to be purely an efficiency measure. It causes feature structures to be much smaller, which makes manipulating them quicker, and it also makes the derivation history invisible, which allows ambiguity packing to happen. But crucially, a valid derivation licensed by the grammar will still unify successfully even if ARGS is not on deleted-daughters (or at least it should! I’m not sure anybody’s tried…) By contrast, we are talking about cases where without deleting APPEND the unifications fail but by deleting it unifications succeed. I guess I feel more or less comfortable waving my hands and saying “it’s a computation thing, not part of the linguistic theory,” but it’s worth noting that you have to wave your hands faster to make this work than you do for ARGS.

I believe there is one caveat here: the number of lists being appended has to be known at the time APPEND is discarded. I’m not even sure how you’d write this, but a hypothetical verb that doesn’t lexically know the length of its ARG-ST but still wants to lexically thread its SLASH using an append-list would then be subject to lost computation when APPEND was deleted before the list-of-alists inside it could link up the NEW-LIST and END-LISTs. This sounds pretty far fetched, but maybe there’s some kind of argument composing verbs in some language that would want to do this – I seem to recall Berthold saying something related to this happens in German, though I wouldn’t begin to be able to construct the example.

I feel like I’ve been coming across as negative at just about every turn, and that’s not my intention. I do think append-lists are pretty cool :slight_smile:


I’ll start with the easier points first.

I agree. I’ve been thinking of this unorthodox use of deleted-daughters as an efficiency measure from the point of view of the grammarian – it means the grammar can be written more succinctly, rather than identifying each LIST individually. (I’m no longer sure this unorthodox efficiency measure will work in the long run… more on that below…)

Yes, with a caveat to the caveat: there could be another type that later introduces the APPEND. In this case, we could introduce a subtype of list, to be used by ARG-ST, which accumulates all of the NON-LOCAL values. Although the APPEND would get deleted at each step, it would get re-introduced at the next step. I considered writing such a type, so that basic-one-arg, basic-two-arg, and basic-three-arg could share a common supertype that does the lexical threading… but I decided it would be near unreadable. Three types isn’t so many.

The bad news would catch up with me eventually! Thanks for clarifying, though! :slight_smile: I think the discussion’s been very productive.

And here is the difficult part… I think you’re right and I can’t see a quick fix. Simplifying your example:

Monkeys sleep deeply and eat.

The ultimate problem is that I originally envisaged wrapper types in a strict input-output paradigm. This was fine for the CONT lists, which just accumulate. But with lexical threading, the mother’s SLASH is a function of the arguments’ SLASHes, and we want to keep the arguments’ SLASHes. Because list-copy is destructive, the arguments’ SLASHes are no longer clean. Diff-lists avoid this particular problem, because the LIST “output” is the same list as the “input”, so there’s no cycle.

There are a few different ways forward. One is to go back to appending diff-lists, which is what I first presented at the Oslo summit. There’s the same functionality as before, but with slightly easier syntax. The downside is that we can’t close the end of a list and still append to it, which is where append-lists should be able to help.

A hacky solution would be to leave the SLASH.APPEND list open, and have each adjunct add its SLASH to this list. This would require introducing a pointer to the end of the list, that gets incremented with each adjunct… This is starting to become quite unwieldy, because the book-keeping needs to be split between different types. This kind of code would probably be difficult to maintain.

A cleaner solution would be to have two copies of each SLASH list, so that we keep one clean, and use the other in appends. So LIST has to be split into three lists: INIT-LIST, which initialises the list and creates two copies, say LIST and PRIVATE-LIST (in the concrete code below, the dirty copy is kept as a diff-list under the feature DIFF-LIST). The former will be kept clean and the latter will be destructively appended to. The output of an append goes into INIT-LIST so we can recursively maintain the parallel lists. This will have two unfortunate consequences. Firstly, the grammarian should always instantiate a list with INIT-LIST and never LIST, or else the appends will break. Secondly, the grammarian should always compare lists using LIST, or else there can be cyclic re-entrancies as we’ve discussed above.

It follows from the second unfortunate consequence that we can’t identify big chunks of structure like NON-LOCAL, because this would include both the clean and dirty lists. We can’t use deleted-daughters to get around this, because the dirty lists are the ones actually doing the appending. This is the coordinating-monkey problem above. So this solution is no longer unorthodox in its use of deleted-daughters, but adds a slight burden on the grammarian to be careful in how append lists can be instantiated and manipulated. The upside is that we can see exactly what is on a list and still append to it.

It looks like I’ll have something to present at the next summit :wink:

Concretely:

list := *top*.
null := list.
cons := list &
  [ FIRST *top*,
    REST list ].

list-wrapper := *top* &
  [ LIST list ].

diff-list := list-wrapper &
  [ LAST list ].

append-list := list-wrapper &
  [ INIT-LIST list-copy & #list & [ NEW-LIST #newlist,
                                    NEW-OPEN-LIST #newopen,
                                    NEW-OPEN-END #newend ],
    LIST #newlist,
    DIFF-LIST diff-list & [ LIST #newopen,
                            LAST #newend ],
    APPEND list-of-alists & [ APPEND-RESULT #list ] ].

list-of-alists := list &
  [ APPEND-RESULT list ].

cons-of-alists := list-of-alists & cons &
  [ FIRST append-list & [ DIFF-LIST [ LIST #start,
                                      LAST #middle ] ],
    REST list-of-alists & [ APPEND-RESULT #middle ],
    APPEND-RESULT #start ].

null-of-alists := list-of-alists & null &
  [ APPEND-RESULT null ].

list-copy := list &
  [ NEW-LIST list,
    NEW-OPEN-LIST list,
    NEW-OPEN-END list ].

cons-copy := list-copy & cons &
  [ FIRST #first,
    REST list-copy & [ NEW-LIST #listrest,
                       NEW-OPEN-LIST #openrest,
                       NEW-OPEN-END #end ],
    NEW-LIST [ FIRST #first,
               REST #listrest ],
    NEW-OPEN-LIST [ FIRST #first,
                    REST #openrest ],
    NEW-OPEN-END #end ].

null-copy := list-copy & null &
  [ NEW-LIST null,
    NEW-OPEN-LIST #end,
    NEW-OPEN-END #end ].
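
And a quick sketch of the intended usage convention (the entry names, the relation type, and the RELS paths are just for illustration): lists get instantiated via INIT-LIST, never via LIST; lists get identified or compared via LIST; and appending works as in the mockup earlier in the thread.

; Instantiating a list goes through INIT-LIST, so that both the clean
; copy (LIST) and the appendable copy (DIFF-LIST) get built.
example-lex-item := lex-item &
  [ SYNSEM.LOCAL.CONT.RELS.INIT-LIST < relation > ].

; Appending: the inputs go on APPEND, and the result feeds the mother's
; own INIT-LIST via APPEND-RESULT.
example-unary-phrase := phrase &
  [ SYNSEM.LOCAL.CONT.RELS.APPEND < #r1, #r2 >,
    DTR.SYNSEM.LOCAL.CONT.RELS #r1,
    C-CONT.RELS #r2 ].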