Multiple wh-questions and obligatory single fronting

I am modeling obligatory single fronting right now, so:

(1) What does the cat see where?
(2) *Where what does the cat see?

(NB: I am modeling a pseudolanguage, so there is no auxiliary or inversion in my test suite; it is just who the cat sees where?)

What I had so far, developing off the 567 instructions, is basically this: wh-words all introduce a non-empty QUE value, and the phrase structure rules make sure those QUE values are percolated. E.g., the head-comp rule says the head’s complement’s QUE value is its own QUE value; then most phrase structure rules inherit from head-nexus-phrase, which identifies the nonlocal constraints of the mother and the head daughter.

Then the head-filler rule (which builds the wh-question phrase), the way I have it now, says the mother’s QUE value is empty. The root also says its QUE value is empty.
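For concreteness, here is a toy Python model of that QUE bookkeeping (this is of course not TDL, and all the names are invented for illustration):

```python
# Toy model of QUE percolation (illustrative Python, not TDL).
# A constituent carries a QUE value, modeled here as a set of wh indices.

def wh_word(index):
    """Wh-words introduce a non-empty QUE value."""
    return {"QUE": {index}}

def plain_word():
    """Ordinary words have an empty QUE."""
    return {"QUE": set()}

def head_comp(head, comp):
    """Head-complement rule: the complement's QUE percolates up
    (together with the head's, via head-nexus identification)."""
    return {"QUE": head["QUE"] | comp["QUE"]}

def head_filler(filler, head):
    """Head-filler rule: the mother's QUE value is empty."""
    return {"QUE": set()}

def root_ok(phrase):
    """The root requires an empty QUE."""
    return phrase["QUE"] == set()

# An in-situ wh-word leaves QUE non-empty at the root:
print(root_ok(head_comp(plain_word(), wh_word("what"))))   # False
# ...but a head-filler phrase discharges it:
print(root_ok(head_filler(wh_word("what"), plain_word()))) # True
```

Note that in this toy version, nothing stops head-filler from applying on top of another head-filler phrase, which is the shape of the overgeneration problem described below.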

This works when we have at most one wh-word, but with multiple, I may need to make some changes.

Right now my only “problem” is that I am licensing where what constructions (such as where what the cat sees?, in my no-auxiliary pseudolanguage). But I don’t have embedded questions in this test suite yet, and I suspect they might bring similar issues to light.


How should I approach this?

Perhaps a (CAT?) feature saying, basically, “a head-filler rule has applied”? And then extraction rules insisting that that bit is not set? However, this probably won’t work for What do you know whether the cat saw?

I’ve been trying to think of a way of doing this using just QUE and SLASH, but so far I couldn’t. I could try to say that the head-filler rule actually doesn’t zero out the QUE list, in which case the QUE of something like what the cat sees could be non-empty, and then I could say: the adjunct extraction phrase cannot have such a head daughter. But that doesn’t seem to make much sense, because then I have to say the root can have a non-empty QUE, and I start licensing things like the cat saw what, which I am trying to rule out (for this pseudolanguage).

In this particular case, I could force adjunct extraction to only happen at the VP level but this doesn’t sound sound :). Does it?

(NB1: Of course in many languages, fronting is optional, but right now I am operating under the assumption that it can be strictly obligatory; perhaps I should revisit that).

(NB2: I am reading Ginzburg and Sag right now, and they don’t seem to be using QUE; they have a feature called WH which is only non-empty for fronted words (somehow). They also have the Nonlocal Feature Principle, which says that the mother’s nonlocal constraints are the union of all the daughters’… So that’s a bit different, although it may well be relevant; still working on loading it into my head.)

This is tricky! I think your intuition about using a CAT feature makes sense, though. Basically, I think we want head-filler to be the last thing to happen to a clause. I wonder if you can reuse MC for this purpose?

Hmm… You know what, this might actually work beautifully. Here’s what I tried, and while it is possible that I will encounter problems with this, for now I have 100% coverage and 0% overgeneration on that same test suite (so, where what the cat sees is beautifully gone, without any obvious regressions).

So: if there is a non-empty QUE somewhere, then MC stays negative until a head-filler rule applies. In other words, [MC -] is introduced by the wh-words themselves, the head-filler rule sets it back to positive, and the extraction rules insist that their head daughter is [MC -].

What’s also nice about this is that I was already using MC to disallow unextracted wh-adjuncts in sentences with just one wh-word (the cat sleeps where).
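As a toy Python sketch (again not TDL; all names are invented), the MC version of the system might look like this:

```python
# Toy sketch of the MC-based fix (illustrative Python, not TDL).
# Wh-words introduce [MC -]; the head-filler rule flips it to [MC +];
# extraction rules demand a [MC -] head daughter; the root wants [MC +].

def wh_word(index):
    return {"QUE": {index}, "SLASH": (), "MC": "-"}

def head_filler(filler, head):
    """Discharge one gap, empty the QUE, and set [MC +]."""
    assert head["SLASH"], "head daughter needs a gap to fill"
    return {"QUE": set(), "SLASH": head["SLASH"][1:], "MC": "+"}

def adjunct_extraction(head, adjunct_index):
    """Introduce a wh-adjunct gap; blocked on a [MC +] head daughter."""
    if head["MC"] == "+":
        return None  # a head-filler phrase already closed this clause off
    return {"QUE": head["QUE"] | {adjunct_index},
            "SLASH": head["SLASH"] + (adjunct_index,),
            "MC": "-"}

def root_ok(p):
    return p["QUE"] == set() and p["SLASH"] == () and p["MC"] == "+"

# "what the cat sees __ where" (where in situ, what extracted then filled):
core = {"QUE": {"where"}, "SLASH": ("what",), "MC": "-"}
filled = head_filler(wh_word("what"), core)
print(root_ok(filled))                                      # True
# "*where what the cat sees": extracting above the filled clause fails:
print(adjunct_extraction(filled, "where"))                  # None
# "*the cat sleeps where": MC stays '-', so the root rejects it:
print(root_ok({"QUE": {"where"}, "SLASH": (), "MC": "-"}))  # False
```

The last example is the single-wh case: with no head-filler phrase, MC never goes positive, so the unextracted wh-adjunct is ruled out at the root.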

I will also keep reading G&S, of course. They do have an analysis of multiple questions, so I need to understand it anyway… It’s just that it is not directly applicable to the GM :).

Sounds reasonable, but keep in mind that other libraries use MC, too. It would be worth constructing regression tests that look at the interaction of those properties, if such interactions could sensibly come up.
