Error in processing profiles using art for treebanking

Hi! I am using art and ACE to process a profile I wish to treebank.

~/grammar/INDRA$ art -f -a 'ace --disable-generalization -g ind.dat -O' /tmp/(name of testsuite)-demo/

but I got this error message for sentence 1399, and art cannot proceed to the next sentences:

reading results for             1398	0 results
reading results for             1399	out of sync; arbiter sent 'ERROR: DEADLY SIGNAL! sent = `Saya memiliki penerbangan London jakarta via Istanbul dengan kode booking CODE'' when expecting a SENT or SKIP header.
failed to read result for 1399

I checked that sentence in the item file:

1399@tt@@@1@@Saya memiliki penerbangan London jakarta via Istanbul dengan kode booking CODE@@@@1@11@@david@2018-12-31

I think the formatting is okay and the grammar (INDRA) can parse the sentence.
Could someone please tell me why I got the error message “out of sync”?

Hi David,

The error you are getting indicates that ACE crashed while trying to parse that input. With the current Git copy of INDRA I got a lexical gap when trying to parse it, but it succeeded when I changed Istanbul to Jakarta. My guess is that there is some kind of crash caused by conditions set up by earlier sentences. I would be happy to look for the problem if you send me the profile in question.

Best regards,

Hi Woodley and David,

I actually encounter this problem quite often when processing many (thousands of) sentences consecutively, using various ERG versions, whether just parsing or generating. I don’t think it has anything to do with the sentences themselves, because they process fine at other times. It seems to me to be triggered purely by the volume of data, which might suggest some memory leak.

All the best,


Thank you, Woodley and Ewa!

Woodley, I am sorry, I haven’t updated the INDRA repository on GitHub, and I cannot share the profile at present.
I divided the two thousand sentences in the profile into two profiles, each with one thousand sentences, and ran art again. All sentences were successfully processed and I didn’t get any error messages. So maybe it is indeed the volume of data (some memory leak), as Ewa said.

I got a similar error message here:

leme:art-0.1.9 ar$ ./mkprof -r ~/hpsg/logon/lingo/erg/tsdb/gold/mrs/relations -i repsol.txt repsol
leme:art-0.1.9 ar$ ./art -a "~/hpsg/ace/ace -g ~/hpsg/ace/erg.dat -n 5 --timeout=60 --max-words=200 --max-chart-megabytes=4000 --max-unpack-megabytes=5000 --rooted-derivations --udx --disable-generalization" repsol
reading results for             4658	out of sync; arbiter sent 'UNIFY: failed additional constraints from glb-type `*ocons*'' when expecting a SENT or SKIP header.
failed to read result for 4658

In my case, I am trying to create a profile with 5,620 sentences. I got the error after several hours, when the process was almost finished! :frowning:

Some questions related to the solution from @davidmoeljadi:

  1. Instead of splitting a profile, can we process only part of it?
  2. My sentences were taken from different documents; can we somehow add more information to the item file to indicate the origin (the document) of each sentence?
  3. Once the profile is complete, what is the easiest and best way to check and improve the parse selection model? I assume we have a tool to inspect the trees and select the best one. How?
  4. In the mkprof call above, I ended up using the logon/lingo/erg/tsdb/gold/mrs/relations file because this directory is mentioned in the PyDelphin documentation. But there are many more profiles in logon/lingo/erg/tsdb/gold and logon/lingo/lkb/src/tsdb/skeletons; what is the difference between them? What would be the impact of choosing another relations file?

The memory leak argument makes me think. Does art spawn only one ace process for all items? The error message I got and @davidmoeljadi’s error message both mention arbiter, which is supposed to be the job controller managing the communication between art and ace, so we could better control this communication by splitting the processing across more than one ace process, right? Unfortunately, I could not go further without the arbiter code; the page is not very helpful! :wink:

@sweaglesw, any comment?

I would still like to be able to debug some of these crashes. My guess is that, as in David’s and Ewa’s experiences, the sentence that crashed Alexandre’s run would also process fine in isolation. The error message in this case indicates that somehow ACE got into a mode where it thought it was compiling a grammar instead of parsing sentences, so most likely a pointer got corrupted somewhere. I would be happy to try running the 5620-item profile on my end to see if I can reproduce the problem and hopefully track it down.

art has two options for processing partial profiles, both based on a list of item ID numbers. You can either invoke it with -I "5, 6, 83, 145" or with -F iid-list.txt where iid-list.txt contains one item ID per line.
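To make the two forms concrete, here is a small sketch (the item IDs are just the ones from the -I example above, and the grammar path is illustrative):

```shell
# Build an iid-list.txt for art -F: one item ID per line.
printf '%s\n' 5 6 83 145 > iid-list.txt

# Then (assuming art and a compiled grammar are available):
#   art -F iid-list.txt -a 'ace -g erg.dat' my-profile
# is equivalent to:
#   art -I "5, 6, 83, 145" -a 'ace -g erg.dat' my-profile
```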

There is a string-valued i-origin field in the item file (2nd field in the current schema) which you could probably use for this purpose; art will ignore it. You would have to provide your own mechanism to get it into the item file, though, since mkprof has no such facility (although it will preserve the field when used to copy a new profile from a skeleton).
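As one possible mechanism, a one-line awk filter could fill the 2nd field of an item file with a document label. This is only a sketch: the sample item line and the "doc1" label are made up, with the field layout following the item-line example earlier in the thread.

```shell
# A made-up one-line item file (fields are @-separated; the 7th field
# holds the sentence, as in the example earlier in the thread):
printf '1@@@@1@@An example sentence.@@@@1@4@@me@2019-01-01\n' > item

# Set the 2nd field (i-origin) to a document label ("doc1" is made up):
awk -F'@' 'BEGIN { OFS = "@" } { $2 = "doc1"; print }' item > item.new
```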

Your command line indicates you are recording the top 5 trees. If your intent is to manually treebank these items to produce training data for improving parse selection in your domain, I would recommend either recording a full forest and using FFTB for treebanking, or recording substantially more than 5 trees (500 used to be a common number) and using the older [incr tsdb()] treebanking tool. The wiki has some information on both methods.

The relations file defines the schema of the testsuite database (aka profile). All of the profiles you mentioned should have identical relations files. You might occasionally run into a profile with an older or newer schema, in which case I would advise always using the newest one. When a new schema is declared (Stephan Oepen is the arbiter of such decisions), it is generally to make room for storing some new piece of information, so unless you specifically need that new piece of information, you won’t see any difference.

When using art -a "ace ...", only one ace process is spawned, and all of the items to process are funneled through it. I still have not done anything about making arbiter a publicly supported piece of software. However, both arbiter and the liba library that it depends on are available through SVN if you feel like poking around.

The basic mechanism for using arbiter is as follows: run arbiter in one terminal, possibly with -d, which prevents it from forking into the background. In another terminal, spawn one or more ace instances connected to the arbiter, like this: ace -g erg.dat ... other options ... -m localhost, where localhost is the machine arbiter is running on. Then invoke art like this: art localhost my-profile, where again art can run on a different host if you want. I should mention that when running with arbiter, parsing memory limits are controlled by arbiter and the memory limits given to ACE on the command line are ignored. This is not easily configurable at the moment, although please see ram.c. You can monitor the status of arbiter by pointing your web browser at http://localhost:8880/, although not all of the numbers are useful or accurate; http://localhost:8880/schedule and http://localhost:8880/tasks are also useful. These web portals will not work unless arbiter is launched with its own source tree as the working directory.
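Put together, a minimal arbiter session using the commands just described might look like this (one command per terminal; the host name and grammar path are illustrative):

```
# terminal 1: run arbiter in the foreground, from its own source tree
#             (required for the web portals to work)
./arbiter -d

# terminal 2 (repeat for additional workers): an ace instance connected
#             to the arbiter host
ace -g erg.dat -m localhost

# terminal 3: point art at the arbiter host instead of using -a
art localhost my-profile
```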

The idea that using arbiter might improve control over the problems David and Ewa and Alexandre have been experiencing is partly true. When ace crashes, arbiter will reschedule the job onto another ace (or if it has run out of workers, wait until another ace connects; arbiter does not spawn new workers). It doesn’t solve the underlying problem that ace is still doing something corrupt (which might possibly affect results even for items before the crash happens, although it also might not). All of the crashing is hidden from art, in any case. I have used arbiter and art to parse collections of tens of millions of sentences before (split into smaller profiles of several thousand). In such cases I have also occasionally run into crashes like the ones you discuss, and sometimes have been able to track them down and fix them, though not always.

I hope some of the above are of interest or use to some of you. As always I am happy to try to offer support if you run into more trouble.

These crashes are why I started doing profile processing with PyDelphin: it was easier for me to detect the crashed process in Python and restart it. However, I recall Woodley saying it should be possible to put a shim in the -a option of art that could accomplish this, too. E.g., art -a ' ...'. @sweaglesw, is that still true?
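A minimal sketch of what such a shim might look like, as a shell function (the function name and usage are my own invention, not an art feature; note the caveat that art’s protocol is stateful, so a restart would still lose the in-flight item):

```shell
# restart_cmd: run the wrapped command (e.g. ace), restarting it whenever
# it exits nonzero, such as after a crash. Caveat: the item being
# processed at crash time is still lost, and a reliably crashing item
# would make this loop forever.
restart_cmd() {
    until "$@"; do
        echo "wrapper: '$1' exited abnormally; restarting" >&2
    done
}

# Hypothetical usage (script name is an assumption):
#   art -a 'ace-wrapper.sh ace -g erg.dat -O' my-profile
```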

Compiling and getting arbiter to work may be a better long-term solution, but if arbiter does not start new workers, would a profile with an item that reliably crashes ACE cause a cascading failure across all workers as arbiter reschedules it?

Maybe we could document this in the wiki? Neither of these options appears in art -h or on the art webpage.

Hi @sweaglesw, I don’t know how to attach a file here, but I copied it to Dropbox. The link is below; let me know when I can remove it from Dropbox.

You will see that this file has 5,602 lines; I found some sentence segmentation errors in the last version.

Hi @arademaker, thanks – I got the file. In your original post, your error message referenced item 4658, but you have now changed the numbering by resegmenting sentences. Do you have a way of telling which sentence was 4658 in that prior profile?

Also, it would be helpful to know what version of ACE and what version of ERG you observed the crash with.

It should still be possible to wrap ACE to make it more robust, yes, although I have no insights about how easy or hard it would be to use this strategy to avoid any particular type of crash.

If a sentence reliably crashes ACE, then yes, it is possible to have a cascading kill effect when using arbiter. These crashes are usually the easiest to track down, though.

Hi @sweaglesw,

After 17 hours of processing the profile again, I got the error:

$ ./mkprof -r ~/hpsg/logon/lingo/erg/tsdb/gold/mrs/relations -i sentences.txt test
$ time ./art -a "~/hpsg/ace/ace -g ~/hpsg/ace/erg.dat -n 500 --timeout=60 --max-words=200 --max-chart-megabytes=4000 --max-unpack-megabytes=5000 --rooted-derivations --udx --disable-generalization" test
reading results for             4676	5 results
reading results for             4677	5 results
reading results for             4678	out of sync; arbiter sent 'ERROR: DEADLY SIGNAL! sent = `At this point, within the thrust belt, the axial portion of the arch is part of a large, thrust-faulted northwest-southeast-trending productive structural feature, about 40 mi long and 18 mi wide, locally termed the LaBarge Anticline.'' when expecting a SENT or SKIP header.
failed to read result for 4678
real	1038m24.210s
user	0m2.598s
sys	0m4.068s

:disappointed_relieved: Another error with art+ace while trying to prepare a profile for full-forest treebanking. It looks like art didn’t understand the ace message. Did I use the --itsdb-forest parameter correctly?

~/hpsg/art-0.1.9/art -a "~/hpsg/ace/ace -g ~/hpsg/ace/erg.dat --itsdb-forest --timeout=60 --max-words=200 --max-chart-megabytes=4000 --max-unpack-megabytes=5000 --rooted-derivations --udx --disable-generalization" test
reading results for                1	28 results
reading results for                2	352 results
reading results for                3	1160 results
reading results for                4	3093 results
reading results for                5	1112 results
reading results for                6	malformed result; no `] ;  (' separator
offending result:
[ LTOP: h0 INDEX: e2 [ e SF: prop TENSE: pres MOOD: indicative PROG: - PERF: - ] RELS: < [ _the_q<0:3> LBL: h4 ARG0: x3 [ x PERS: 3 ] RSTR: h5 BODY: h6 ]  [ udef_q<4:11> LBL: h7 ARG0: x8 RSTR: h9 BODY: h10 ]  [ generic_entity<4:11> LBL: h11 ARG0: x8 ]  [ _clean_a_of<4:11> LBL: h11 ARG0: e12 [ e SF: prop TENSE: untensed MOOD: indicative PROG: - PERF: - ] ARG1: x8 ]  [ comp<4:11> LBL: h11 ARG0: e13 [ e SF: prop TENSE: untensed MOOD: indicative PROG: - PERF: - ] ARG1: e12 ARG2: u14 ]  [ udef_q<12:70> LBL: h15 ARG0: x16 [ x PERS: 3 NUM: sg ] RSTR: h17 BODY: h18 ]  [ _and_c<12:15> LBL: h19 ARG0: x3 L-INDEX: x8 R-INDEX: x16 ]  [ _light_a_1<16:23> LBL: h20 ARG0: e21 [ e SF: prop TENSE: untensed MOOD: indicative PROG: - PERF: - ] ARG1: x16 ]  [ comp<16:23> LBL: h20 ARG0: e22 [ e SF: prop TENSE: untensed MOOD: indicative PROG: - PERF: - ] ARG1: e21 ARG2: u23 ]  [ compound<24:38> LBL: h20 ARG0: e24 [ e SF: prop TENSE: untensed MOOD: indicative PROG: - PERF: - ] ARG1: x16 ARG2: x25 ]  [ udef_q<24:31> LBL: h26 ARG0: x25 RSTR: h27 BODY: h28 ]  [ _density_n_1<24:31> LBL: h29 ARG0: x25 ]  [ _liquid_n_1<32:38> LBL: h20 ARG0: x16 ]  [ udef_q<39:42> LBL: h30 ARG0: x31 [ x PERS: 3 NUM: sg GEND: n IND: - ] RSTR: h32 BODY: h33 ]  [ _mud_n_1<39:42> LBL: h34 ARG0: x31 ]  [ _travel_v_1<43:50> LBL: h20 ARG0: e35 [ e SF: prop TENSE: pres MOOD: indicative PROG: - PERF: - ] ARG1: x31 ]  [ _up_p<51:53> LBL: h20 ARG0: e36 [ e SF: prop TENSE: untensed MOOD: indicative PROG: - PERF: - ] ARG1: e35 ARG2: x16 ]  [ _through_p<54:61> LBL: h20 ARG0: e37 [ e SF: prop TENSE: untensed MOOD: indicative PROG: - PERF: - ] ARG1: e36 ARG2: x38 [ x PERS: 3 NUM: sg ] ]  [ _a_q<62:63> LBL: h39 ARG0: x38 RSTR: h40 BODY: h41 ]  [ _vortex/NN_u_unknown<64:70> LBL: h42 ARG0: x38 ]  [ _in_p<71:73> LBL: h19 ARG0: e43 [ e SF: prop TENSE: untensed MOOD: indicative PROG: - PERF: - ] ARG1: x3 ARG2: x44 [ x PERS: 3 NUM: sg IND: + ] ]  [ _the_q<74:77> LBL: h45 ARG0: x44 RSTR: h46 BODY: h47 ]  [ _center_n_of<78:84> LBL: h48 
ARG0: x44 ARG1: x49 [ x PERS: 3 NUM: sg ] ]  [ _theNOTE: cancelling parsing, taking too long
failed to read result for 6

I created the profile with ./mkprof -r ~/hpsg/logon/lingo/erg/tsdb/gold/mrs/relations -i test.sent test.

The test.sent file has seven sentences:

A hydrocyclone device that removes large drill solids from the whole mud system.
The desander should be located downstream of the shale shakers and degassers, but before the desilters or mud cleaners.
A volume of mud is pumped into the wide upper section of the hydrocylone at an angle roughly tangent to its circumference.
As the mud flows around and gradually down the inside of the cone shape, solids are separated from the liquid by centrifugal forces.
The solids continue around and down until they exit the bottom of the hydrocyclone (along with small amounts of liquid) and are discarded.
The cleaner and lighter density liquid mud travels up through a vortex in the center of the hydrocyclone, exits through piping at the top of the hydrocyclone and is then routed to the mud tanks and the next mud-cleaning device, usually a desilter.
Various size desander and desilter cones are functionally identical, with the size of the cone determining the size of particles the device removes from the mud system.

Should I understand that full-forest treebanking is not a stable approach yet? This is so sad; I was really convinced by @sweaglesw’s master’s thesis.

I have also tried removing the last two long sentences and producing a profile with only 5 sentences. Moreover, since I was trying to use the compiled version of fftb for macOS, I compiled the ERG (trunk) with ACE 0.9.19. The commands were:

~/hpsg/art-0.1.9/mkprof -r ~/hpsg/logon/lingo/erg/tsdb/gold/mrs/relations -i test-ff.sent test-ff
~/hpsg/art-0.1.9/art -a "~/hpsg/ace-0.9.19/ace -g ~/hpsg/ace-0.9.19/erg.dat --itsdb-forest --timeout=60 --max-chart-megabytes=4000 --max-unpack-megabytes=5000 --disable-generalization" test-ff

In the FFTB folder, I called it with ./fftb -g ~/hpsg/ace-0.9.19/erg.dat --browser ~/work/slb-gloss/test-ff. There were no error messages, but no matter what sentence I try to open, I always get back the message:

404 no stored forest found for this item


@arademaker, the --itsdb-forest option doesn’t do anything for you here, and won’t get you an FFTB-compatible profile. You might be able to get an FFTB-compatible profile from pydelphin with that option. With art, what you want instead is -f on the art command line and -O on the ace command line. In the invocation you used, you didn’t set any limit on the number of results to output, and also didn’t ask for a forest (at least in the way ace expected), so what happened instead is that you started getting thousands of results.
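For the record, a corrected full-forest invocation along the lines just described might look like this (reusing the paths and profile name from the earlier posts; any timeout and memory limits are optional and omitted here):

```
~/hpsg/art-0.1.9/art -f -a "~/hpsg/ace-0.9.19/ace -g ~/hpsg/ace-0.9.19/erg.dat -O --disable-generalization" test-ff
./fftb -g ~/hpsg/ace-0.9.19/erg.dat --browser test-ff
```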

The error you are seeing appears to be a race condition between getting the MRS out the door and the timeout. Is it reproducible? The conditions to trigger it were likely exacerbated by the large quantity of results being printed. It will go on my bug-kill list though!

Not yet (see issue #201). I might get a chance to work on this when I’m back from vacation.