Lexical threading

What is a good way of summarizing why lexical threading is needed? In Bouma et al., I think its role is to eliminate subject/complement extraction rules. But in the Grammar Matrix we still use subject and complement extraction. I am assuming we still get some streamlining out of it, but I cannot quite put my finger on it.

Related to this, what’s a good way of explaining the type gap, whose LOCAL value is the sole item on its SLASH list? And what’s a good way of summarizing how this later fits into the mechanics of extraction?

I’m afraid I don’t know the answer to the first question — I’m curious what @Dan’s take is!

As for the second one: I believe it has to do with the traceless analysis of long-distance dependencies. That is, the bottom of the long-distance dependency has to introduce the information that something is missing. In earlier HPSG analyses, that involved an actual trace in the phrase structure. Giving the gap a non-empty SLASH list, together with lexical threading, gets this information in, without having any traces.
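To make that concrete, here is a sketch in TDL-style notation (the type and feature names are illustrative, not the actual ERG or Grammar Matrix definitions) of what such a gap looks like: its own LOCAL value is the single member of its SLASH list, which is how the bottom of the dependency registers that something is missing without a trace in the tree:

```tdl
; Illustrative sketch only -- not actual ERG/Matrix TDL.
; The gap contributes its own LOCAL value as the sole member of its
; SLASH (a diff-list), so the information that something is missing
; enters the derivation without a phonetically empty trace.
gap := synsem &
  [ LOCAL #local,
    NON-LOCAL.SLASH <! #local !> ].
```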


I think your answer to the second question at least partly answers the first one as well :).

The main motivation for lexical threading of SLASH is to enable a better analysis of lexical entries that select for a complement containing a gap, including adjectives such as “easy” (Kim is easy to talk to) and degree specifiers such as “too” (That box is too heavy to carry). Without lexical threading, SLASH values have to be passed up from all daughters in a phrase, so in pre-threading HPSG days, a special and awkward “TO-BIND” mechanism had to be added to the formalism to stop the gap in the VP complement of “easy” from being passed up the tree. With lexical threading, the SLASH of the mother in all headed phrases is simply identified with that of the head daughter, and it is up to the lexical head to say whether any SLASH values of its complements (or subject or specifier) should also be passed upward; most entries do, but not instances of the lexical types for “easy” and “too”, among others.
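Schematically, this could be sketched in TDL as follows (again, purely illustrative type and feature names, not the real ERG or Matrix types): headed phrases simply share SLASH with the head daughter, and each lexical head decides which of its arguments’ SLASH values to amalgamate into its own:

```tdl
; Illustrative sketch only -- not actual ERG/Matrix TDL.

; In all headed phrases, the mother's SLASH is just the head
; daughter's SLASH; non-head daughters contribute nothing directly:
headed-phrase := phrase &
  [ SYNSEM.NON-LOCAL.SLASH #slash,
    HEAD-DTR.SYNSEM.NON-LOCAL.SLASH #slash ].

; An ordinary head amalgamates the SLASH of its arguments into its
; own SLASH (shown here for a single complement; in practice this is
; a diff-list append over all arguments):
ordinary-verb-lex := verb-lex &
  [ SYNSEM [ NON-LOCAL.SLASH #slash,
             LOCAL.CAT.VAL.COMPS < [ NON-LOCAL.SLASH #slash ] > ] ].

; An "easy"-type adjective instead declines to pass up the gap inside
; its VP complement: its own SLASH is empty even though the
; complement's is not, so no separate TO-BIND mechanism is needed:
easy-adj-lex := adj-lex &
  [ SYNSEM [ NON-LOCAL.SLASH <! !>,
             LOCAL.CAT.VAL.COMPS < [ NON-LOCAL.SLASH <! [ ] !> ] > ] ].
```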



Thank you, Dan!

I think it will make sense to ask this additional question in this same topic: what made it undesirable/impossible to actually get rid of the extraction rules in the ERG, as Bouma et al. and Ginzburg and Sag did in their theoretical accounts? I know I have asked this before, more than once, but I don’t recall the answer and cannot find where I might have written it down… Adding it here should really help, I think.

Something to do with the argument/adjunct distinction perhaps (having to be able to count things?). Bouma et al. 2001 use DEPS instead of ARGS and MOD, and that I think allows them to get rid of extraction, if I remember right.

Chiming in to say I’m fairly sure the reason that Bouma et al. can’t be directly implemented is that the DEPS list is of unbounded/indeterminate length, which isn’t feasible in TDL…
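For contrast, here is a hedged sketch of the kind of unary extraction rule the implemented grammars keep instead (illustrative names, not the actual Matrix/ERG rule): it pops one complement off a fixed-length COMPS list and puts its LOCAL value on SLASH, which TDL can express, whereas constraining a DEPS list of unbounded length cannot be stated finitely:

```tdl
; Illustrative sketch only -- not the actual Matrix/ERG rule.
; A unary complement-extraction rule: the first complement is removed
; from the daughter's COMPS list, and its LOCAL value appears on SLASH
; (with lexical threading, this SLASH value is what the lexical head
; amalgamates and passes up).
extracted-comp-rule := unary-phrase &
  [ SYNSEM [ LOCAL.CAT.VAL.COMPS #comps,
             NON-LOCAL.SLASH <! #gap !> ],
    DTR.SYNSEM.LOCAL.CAT.VAL.COMPS < [ LOCAL #gap ] . #comps > ].
```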


Is there any paper that documents it with minimal examples?

The easy-adjectives? Maybe look in Sag, Wasow, Bender (e.g. Chapter 14). Or which aspect are you asking about?

Many things that I was not able to find in Sag, Wasow and Bender: lexical threading, lexical threading of SLASH, SLASH and the TO-BIND mechanism. But I am reading Chapter 14 now to understand GAP…

Hmm, well, you could try my dissertation, the Background and Previous Work chapters? E.g. sections 4.1.1–4.1.2? I tried to explain lexical threading there for a non-specialist audience…