I agree, and I believe that almost all DELPH-IN participants agree as well. What I am looking for is evidence, experiments, and results that can support this claim. As I said at the summit, people often agree that LLMs are insufficient for the study of language. But for NLP applications, the question is how many tasks from the traditional NLP pipelines (tokenization, sentence segmentation, lemmatization, PoS tagging, parsing, NER, WSD, etc.) are still relevant.
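To make that list concrete, here is a minimal sketch of such a pipeline, using spaCy purely as an illustration (any comparable toolkit would do; the model name is an assumption about what is installed locally):

    # Minimal sketch of a traditional NLP pipeline: tokenization, lemmatization,
    # PoS tagging, dependency parsing, and NER in one pass.
    # Assumes: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Which book did the guy who left give to his neighbor?")

    # Token-level layers: surface form, lemma, PoS tag, dependency relation, head.
    for token in doc:
        print(token.text, token.lemma_, token.pos_, token.dep_, token.head.text)

    # Named entities (none are expected for this particular sentence).
    for ent in doc.ents:
        print(ent.text, ent.label_)

The question is which of these layers an application still needs once an LLM is in the loop.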
Let me make it concrete. At LREC 2016, I attended a workshop on ERG semantics. Examples from the slides advertise the capability of the ERG to capture predicate-argument structures across long-distance dependencies and unusual word orders. By asking ChatGPT, we can almost always get good answers. Of course, a careful analysis of the Q&A below will highlight some potential inconsistencies (a pydelphin sketch for getting the corresponding ERG analysis follows the transcript).
"Most house cats are easy for dogs to chase." who chased whom?
In the given statement, dogs are chasing the house cats.
"Which book did the guy who left give to his neighbor?" who left the book?
The guy who left gave the book to his neighbor.
In "Which book did the guy who left give to his neighbor?", who gave the book to whom?
In the given sentence, the guy who left gave the book to his neighbor.
In "Which book did the guy who left give to his neighbor?", who own the book?
In the given sentence, it is not specified who owns the book.
In "Which book did the guy who left give to his neighbor?", what is the book about?
In the given sentence, the specific book or its content is not mentioned.
In "Which book did the guy who left give to his neighbor?", who was the owner of the book?
In the given sentence, the owner of the book is the guy who left.
In "Which book did the guy who left give to his neighbor?", who owned the book?
In the given sentence, it is not specified who owned the book.
In "We know that the cat didnât sleep." who knows what?
In the given statement, it is not specified who knows that the cat didn't sleep.
In "We know that the cat didnât sleep." the person who wrote the statement knows what?
In the given statement, the person who wrote the statement knows that the cat didn't sleep.
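For comparison with the ChatGPT answers above, this is roughly how one would ask the ERG itself for the predicate-argument structure of the first example, via pydelphin and ACE. It is only a sketch: it assumes a local `ace` binary on PATH and a compiled ERG grammar image at the placeholder path below, and the exact predicate name is from memory.

    # Sketch: getting the ERG's own predicate-argument structure with pydelphin
    # and ACE, instead of asking an LLM. Assumes `ace` is on PATH and a compiled
    # ERG grammar image exists at the placeholder path below.
    from delphin import ace

    GRAMMAR = "erg.dat"  # placeholder: path to a compiled ERG grammar image

    response = ace.parse(GRAMMAR, "Most house cats are easy for dogs to chase.")
    result = response.result(0)   # best-ranked analysis
    mrs = result.mrs()            # its Minimal Recursion Semantics structure

    # Each elementary predication carries explicit roles; the ARG1/ARG2 of the
    # chase predicate (something like _chase_v_1) encode "dogs chase cats"
    # regardless of the surface word order.
    for ep in mrs.rels:
        print(ep.predicate, ep.args)

The contrast I have in mind is that the grammar returns the roles as a structured object one can evaluate, rather than as free text.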
But the predicate-argument structure is only part of the semantics... Sure, I don't buy LLM-centered NLP. I am just trying to find references to support the claim that generative language models are just one tool and do not replace all the others.
I hope this discussion is not too dull.