Here’s a naive question. I’m a computational social scientist interested in using automated methods to analyze misinformation online. I have in mind using semantic parsing to split sentences into more basic elements of meaning and to understand the relationships between those elements. I’ve looked into PropBank, AMR, and now MRS / ERS. There’s a bit of a learning curve with MRS / ERS, so I’m wondering whether anyone has thoughts on how useful MRS / ERS might be for my purposes, and how it stacks up against AMR.
AMR seems more complete than PropBank in identifying semantic arguments, but I have a number of reservations: it drops case and other syntactic information; it exists only for English (fine for now, but perhaps not ideal in the longer run); and I’m not sure the roughly 60K annotated sentences in the AMR corpus will be enough. My plan is to train a deep learning model to parse sentences using AMR’s data and then run the model on arbitrary web text. Given the number of possible verbs / verb senses, each with its own often fairly idiosyncratic argument structure, I’m concerned that a model trained on AMR’s annotated dataset will not generalize well to arbitrary text.
I know that MRS annotations are available for the 2008 English Wikipedia, though I gather only some fraction of that is ‘gold’ annotated (still, a deep learning model could pre-train on the non-gold annotations and then fine-tune on the gold subset).
Any thoughts on whether I’m barking up the right tree in considering MRS / ERS?
Cheers, Peter