Do/should the number of trees in an fftb forest correspond to the number of parses found by ACE

I noticed that the number of trees in the fftb forest for some of my test items (for inferred grammars with lots of ambiguity) doesn’t necessarily match the number of parses reported by the parse file when I processed a profile with ACE. For one item, for example, the counts are 1908 from ACE vs. 2448 in fftb. Should these numbers be the same? If not, what’s the difference?

Hi Kristin,

The numbers are not expected to match. They are both kind of supposed to measure the same thing, but the number given by FFTB is an estimate produced without actually enumerating the readings and seeing which ones are valid. It corresponds to the number of ways of unpacking the forest to form trees consistent with what was found during the packing phase of parsing. Some of those can prove to be inconsistent (i.e. imply failed unifications) upon closer inspection. This happens, for instance, when a more specific edge packs into a more general edge, which then undergoes a rule which is compatible with the more general edge but not the more specific one. If that sounds like gobbledygook I can try to find a more concrete example…

Woodley

Thanks, Woodley. I don’t understand perfectly, but well enough for my purposes :slight_smile:

Is it possible to construct a minimal example to make it clearer what packing is, and what specific vs. general edges are? Maybe a pointer to the best literature about it?


Abstractly, if you think of parsing as search for derivations licensed by the grammar whose yield is a particular string, then packing is the situation where you can save work by recognizing that two or more parts of the yet-to-be-explored portion of the search space are either equivalent or close enough to equivalent that exploring only one of them suffices.

The search algorithm used by ACE, LKB, and PET is basically the same: find all the analyses of a string by first finding all the analyses of each substring, building bigger and bigger analyses using the grammar rules until all combinations have been exhausted. An analysis of a (sub)string is called an “edge.” Each edge is built on some number of shorter edges and possibly gives rise to multiple longer edges. In this context, packing is the situation where two edges spanning the same portion of the input string are built in distinct ways (e.g. analyses of an NP with different internal PP attachment choices) but present the same combinatorial possibilities with respect to the rest of the input. When that happens, we “pack” one of the edges and proceed to build bigger analyses only using the other one.
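
Very roughly, in toy Python (nothing here is ACE’s real code; `Edge`, `equivalent`, and the chart are simplified stand-ins, with a plain category label standing in for a full feature structure):

```python
# Toy sketch of packing during chart parsing.  `category` is a stand-in for
# a full HPSG feature structure; none of this is ACE's actual implementation.

from dataclasses import dataclass, field

@dataclass
class Edge:
    start: int                     # left boundary of the covered substring
    end: int                       # right boundary of the covered substring
    category: str                  # stand-in for the edge's feature structure
    daughters: tuple = ()          # shorter edges this edge was built from
    packed: list = field(default_factory=list)  # edges packed into this one

def equivalent(a: Edge, b: Edge) -> bool:
    # Stand-in for "same combinatorial potential with respect to the rest
    # of the input": same span and same (simplified) category.
    return (a.start, a.end, a.category) == (b.start, b.end, b.category)

def add_edge(chart: list, new: Edge) -> bool:
    """Add `new` to the chart, packing it if an equivalent edge already exists.

    Returns True if `new` becomes a representative (and should feed further
    rule applications), False if it was packed and needs no further work."""
    for existing in chart:
        if equivalent(existing, new):
            existing.packed.append(new)
            return False
    chart.append(new)
    return True

# Two NP analyses over the same span with the same external potential:
np_a = Edge(0, 9, "NP")            # e.g. low PP attachment internally
np_b = Edge(0, 9, "NP")            # e.g. high PP attachment internally
chart = []
add_edge(chart, np_a)              # True: becomes the representative
add_edge(chart, np_b)              # False: packed into np_a
```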

As an example, consider a sentence like “The cat in the hat in the hat in the hat slept.” At the end of the day, it is going to receive an analysis along the lines of (S (NP VP)). There are lots of possible structures for the NP (12 of them in the ERG), but the internal structure of the NP makes no difference to what happens to it next. All 12 of those can be represented by a single edge when it comes time to explore how to combine the NP with the remaining words in the sentence. In this example there is only one step remaining, but if the rest of the sentence also had ambiguity, then we are looking at the difference between combining that ambiguity with the NP 12 times vs. just once.
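
Putting rough numbers on that (only the 12 NP readings come from the ERG example; the ambiguity of the rest of the sentence is invented here):

```python
np_readings = 12     # internal analyses of the NP edge (from the ERG example)
rest_readings = 5    # invented: suppose the rest of the sentence had 5 analyses

# Without packing, every NP analysis is combined with every analysis of the rest:
without_packing = np_readings * rest_readings    # 60 combinations explored

# With packing, one representative NP edge is combined with the rest, and the
# 12-way ambiguity is only multiplied back in during unpacking:
with_packing = 1 * rest_readings                 # 5 combinations explored
```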

One way to be sure that the combinatorial potential of two edges is identical is to check that their feature structures are identical. Unfortunately, given the detailed nature of HPSG feature structures and the degree to which they (intentionally or not) capture derivation history, this “complete equivalence” basically doesn’t happen. To salvage the efficiency gains from merging the search space (parsing a sentence of any substantial length with the ERG is essentially intractable without it), practical parsers sometimes pack even when they can’t prove that the combinatorial potential of the two edges is identical – or even when they know they are not identical. Careful bookkeeping makes it possible to reconstruct (unpack) a conservative bound on the combinatorics once exploration of the full search space with packing is complete.
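
As a sketch of what packing without proven equivalence might look like, reusing the toy `Edge` from above, with `subsumes` as a crude placeholder for the real feature-structure subsumption test:

```python
def subsumes(general: str, specific: str) -> bool:
    # Placeholder subsumption test on the stand-in categories: a bare label
    # is taken to subsume any bracketed refinement of itself, so "NP"
    # subsumes "NP[sg]".  The real test compares feature structures.
    return specific == general or specific.startswith(general + "[")

def try_to_pack(chart: list, new: Edge) -> bool:
    """Pack `new` under an existing edge over the same span whose (stand-in)
    feature structure subsumes it.  The representative's combinatorial
    potential is then an upper bound on the packed edge's, never an
    undercount, which is what keeps the unpacked bound conservative."""
    for existing in chart:
        same_span = (existing.start, existing.end) == (new.start, new.end)
        if same_span and subsumes(existing.category, new.category):
            existing.packed.append(new)
            return True     # no further rules are applied to `new`
    return False
```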

That conservative bound is what FFTB is showing as the number of remaining trees. When the combinatorial potential of the packed edge is actually less than that of the edge taken to represent it, the unpacking process finds spurious trees that can only be ruled out through enumeration (although some pruning is possible). If on the other hand the combinatorial potential of the packed edge is actually greater than that of the edge taken to represent it, correct trees have been overlooked, and this situation must be carefully avoided by the packing algorithm.
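
In terms of those toy edges, the two numbers from the original question can be mimicked as follows: `count_trees` plays the role of FFTB’s estimate, and `enumerate_trees` plays the role of ACE’s enumeration, with `unifications_succeed` as an invented placeholder for replaying the actual unifications:

```python
from itertools import product

def unifications_succeed(candidate) -> bool:
    # Placeholder: a real unpacker replays the unifications implied by the
    # candidate tree and rejects it if any of them fail.  Those rejections
    # are why ACE can report fewer readings than the forest size.
    return True

def count_trees(edge: Edge) -> int:
    """FFTB-style estimate: count the ways of unpacking the forest without
    checking that each combination is actually consistent."""
    total = 0
    for rep in [edge] + edge.packed:
        local = 1
        for d in rep.daughters:
            local *= count_trees(d)
        total += local
    return total

def enumerate_trees(edge: Edge):
    """ACE-style enumeration: spell out each candidate tree and keep only
    those whose unifications go through."""
    for rep in [edge] + edge.packed:
        for combo in product(*(enumerate_trees(d) for d in rep.daughters)):
            candidate = (rep.category,) + combo
            if unifications_succeed(candidate):
                yield candidate

# With np_a and np_b from the first sketch, count_trees(np_a) == 2, and
# enumerate_trees(np_a) yields both candidates, since the placeholder check
# accepts everything; a real unpacker might reject one of them.
```

On this picture, the gap between 2448 and 1908 in the original item would correspond to the candidates that a real unification check rejects during unpacking.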

I’m not sure what you mean by “specific vs. general edge”, but at a guess I think you may be referring to the “generalization packing” algorithm employed by ACE. This is a further step in the direction of reducing the search space. In the situation where the combinatorial potential of edges X and Y does not stand in a strict subset relationship, it is impermissible to pack X, leaving Y as representative, and it is also impermissible to pack Y, leaving X as representative. In certain limited situations of this type, ACE creates a new edge G whose combinatorial potential is a superset of both X’s and Y’s (implemented by creating a feature structure which subsumes both X’s and Y’s feature structures, and then packing both X and Y, leaving G as the representative). This often gives substantial speedups, but at the cost of increased unpacking failures – a good tradeoff for automatically parsing large amounts of text, but a bad tradeoff when you want to manually treebank.
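
A crude sketch of that last step, again on the toy categories rather than real feature structures, and not ACE’s actual bookkeeping (`generalize` is an invented placeholder for building a feature structure that subsumes both inputs):

```python
def generalize(cat_x: str, cat_y: str) -> str:
    # Placeholder: drop any bracketed refinement, so "NP[sg]" and "NP[pl]"
    # both generalize to "NP".  Assumes the two stand-in labels share a base;
    # the real operation builds a feature structure subsuming both inputs.
    return cat_x.split("[", 1)[0]

def generalization_pack(chart: list, x: Edge, y: Edge) -> Edge:
    """Sketch of generalization packing: build a more general edge G, pack
    X and Y into it, and let only G feed further rule applications.  Because
    G's combinatorial potential is a superset of both X's and Y's, unpacking
    can propose extra candidate trees whose unifications then fail."""
    g = Edge(x.start, x.end, generalize(x.category, y.category),
             daughters=(), packed=[x, y])
    chart.remove(x)    # x and y are assumed to be current representatives
    chart.remove(y)
    chart.append(g)
    return g
```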


Woodley gives a very clear overview and justification of the approach. If you’re interested in further background, and algorithmic and experimental details, I suggest this paper:

@inproceedings{oepen-carroll-2000-ambiguity,
  title = "Ambiguity Packing in Constraint-based Parsing – Practical Results",
  author = "Oepen, Stephan and Carroll, John",
  booktitle = "1st Meeting of the North American Chapter of the Association for Computational Linguistics",
  year = "2000",
  url = "https://aclanthology.org/A00-2022",
}


Thank you very much for the didactic and detailed explanation @sweaglesw! Thank you @johnca for the reference! I am happy that I asked… I have a much better understanding now.