When comparing the results of different runs of MOM+AGGREGATION, I see a small amount of variation. Run on the abz data, the average number of readings per sentence differs slightly between runs: I have gotten values such as 2177.87, 2178.01, 2178.04, and 2201.24.
I was wondering if anyone knows why something like this might occur, even though it’s being run on the same data every time.
I actually remember this happening when we were writing the Abui paper back in 2017. Unfortunately there isn’t anything about it in the paper itself, but I vaguely remember putting a note in the repro repository/script. I can’t access GitLab at the moment (is it broken?), but it should also be on SVN. The paper has a slightly broken link which can be recovered by removing a space, and it ultimately points here: https://sites.google.com/site/computel2017abui
Actually, the site says: “The results may vary slightly from the paper due to some randomness in tie-breaking.” Sorry there isn’t more detail there. By tie-breaking I think I meant something like which position class to merge with which other one, given an equal overlap; a sketch of how that can vary between runs is below.
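To make that concrete, here is a minimal Python sketch of how equal-overlap tie-breaking can produce different results on identical input. It assumes a greedy aggregation step that merges the pair of position classes with the greatest overlap; all names here (overlap, pick_merge_pair, position_classes) are hypothetical illustrations, not the actual MOM+AGGREGATION code.

```python
# Hypothetical sketch: run-to-run variation from tie-breaking.
# Not the real MOM+AGGREGATION implementation.

def overlap(a: frozenset, b: frozenset) -> int:
    """Number of members shared by two position classes."""
    return len(a & b)

def pick_merge_pair(position_classes: set[frozenset]) -> tuple[frozenset, frozenset]:
    """Greedily pick the pair of classes with maximal overlap.

    When several pairs tie on overlap, max() returns whichever tied
    pair it encounters first. Iteration order over a set of frozensets
    of strings depends on hash values, which Python randomizes per run
    (PYTHONHASHSEED), so the winner can differ between runs even on
    identical data.
    """
    pairs = [
        (a, b)
        for a in position_classes
        for b in position_classes
        if a is not b
    ]
    return max(pairs, key=lambda p: overlap(*p))

if __name__ == "__main__":
    # All three pairs overlap in exactly one member: a three-way tie.
    classes = {
        frozenset({"pfx1", "pfx2"}),
        frozenset({"pfx2", "pfx3"}),
        frozenset({"pfx3", "pfx1"}),
    }
    print(pick_merge_pair(classes))  # which tied pair wins varies per run
```

If this is indeed the cause, making the tie-break deterministic, for example by sorting tied candidates on a stable secondary key or fixing PYTHONHASHSEED, should make the runs reproducible.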