Grammar-based → Symbolic Semantic Parsing
- Dynamic Syntax (DS): inherently incremental, but lacks a wide-coverage parser. [Kempson et al. (2001). "Dynamic Syntax: The Flow of Language Understanding" - Blackwell Publishing]
- Combinatory Categorial Grammar (CCG): not inherently incremental, but has both wide-coverage and incremental parsers; a toy sketch of incremental CCG combination follows this list. [Steedman (1996). "Surface Structure and Interpretation" - MIT Press]
- PL-TAG, RMRS-IP, HPSG: incremental parsing efforts for these formalisms have unfortunately been discontinued. [Konstas & Keller (2015). "Semantic role labeling improves incremental parsing" - ACL-IJCNLP] [Hough et al. (2015). "Incremental semantics for dialogue processing: Requirements, and a comparison of two approaches" - IWCS] [Ginzburg et al. (2017). "Incrementality and HPSG: Why Not?" - pre-print]
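To make the CCG point above concrete, here is a minimal sketch, in the spirit of Steedman's combinators but not the API of any wide-coverage parser (all names and helpers are illustrative), of why CCG admits incremental left-to-right derivations: with a type-raised subject, forward composition assigns a single category to every sentence prefix.

```python
# Toy CCG combinators; illustrative only, not a real parser.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Cat:
    """A CCG category: an atom such as NP or S, or a functor result/arg or result\\arg."""
    atom: str = ""
    slash: str = ""                    # "/" forward, "\\" backward, "" for atoms
    result: Optional["Cat"] = None
    arg: Optional["Cat"] = None

    def __str__(self):
        return self.atom if not self.slash else f"({self.result}{self.slash}{self.arg})"

def fapply(left: Cat, right: Cat) -> Optional[Cat]:
    """Forward application: X/Y  Y  =>  X."""
    return left.result if left.slash == "/" and left.arg == right else None

def fcompose(left: Cat, right: Cat) -> Optional[Cat]:
    """Forward composition: X/Y  Y/Z  =>  X/Z (enables left-branching derivations)."""
    if left.slash == "/" and right.slash == "/" and left.arg == right.result:
        return Cat(slash="/", result=left.result, arg=right.arg)
    return None

NP, S = Cat(atom="NP"), Cat(atom="S")
vp = Cat(slash="\\", result=S, arg=NP)        # S\NP, a verb phrase
saw = Cat(slash="/", result=vp, arg=NP)       # (S\NP)/NP, a transitive verb
john = Cat(slash="/", result=S, arg=vp)       # S/(S\NP), type-raised subject

prefix = fcompose(john, saw)                  # the prefix "John saw" gets S/NP
print(prefix)                                 # (S/NP)
print(fapply(prefix, NP))                     # consuming the object NP yields S
```

Without type-raising, "John saw" would have no single category and the parser would have to wait for the object; the composed S/NP is what makes the analysis of each prefix strictly incremental.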
E2E Neural → [Large] Language Models
- Inherently incremental architectures
- RNNs & LSTMs: no longer state of the art. [Hochreiter & Schmidhuber (1997). "Long short-term memory" - Neural Computation]
- Auto-regressive models (GPT-2, etc.): weaker incremental performance than bidirectional models, since committed outputs cannot be revised (see the sketch after this list). [Madureira et al. (2024). "When Only Time Will Tell: Interpreting How Transformers Process Local Ambiguities Through the Lens of Restart-Incrementality" - ACL 2024]
- Mamba, xLSTM, RWKV: recent architectures that have not yet been evaluated in an incremental setting. [Gu & Dao (2024). "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" - COLM] [Beck et al. (2024). "xLSTM: Extended Long Short-Term Memory" - NeurIPS] [Peng et al. (2023). "RWKV: Reinventing RNNs for the Transformer Era" - Findings of EMNLP]
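The following sketch shows what makes auto-regressive models inherently incremental: cached key/value states mean each new token costs one forward step, but earlier predictions are frozen and can never be revised. It assumes the Hugging Face `transformers` library and the public "gpt2" checkpoint; tokenizing word by word is a simplification for illustration.

```python
# Sketch: incremental processing with an auto-regressive LM (assumes `transformers`
# is installed and can download the public "gpt2" checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

past = None  # cached key/value states for the prefix consumed so far
for word in ["The", " old", " man", " the", " boats"]:  # a garden-path sentence
    ids = tokenizer(word, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(input_ids=ids, past_key_values=past, use_cache=True)
    past = out.past_key_values       # extend the cache; earlier states stay fixed
    next_id = out.logits[0, -1].argmax().item()
    print(f"after {word!r}: next-token guess {tokenizer.decode(next_id)!r}")
```

The cache keeps the whole pass linear in the number of tokens, but if the model commits to the wrong reading of "man", nothing in this loop lets it go back; that is exactly the weakness on local ambiguities probed by Madureira et al. (2024).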
- Bidirectional models adapted incrementally: restart-incrementality is not the only way to run a non-incremental model incrementally, but it has proven the most effective; see the reference for alternatives. [Madureira & Schlangen (2020). "Incremental Processing in the Age of Non-Incremental Encoders: An Empirical Assessment of Bidirectional Models for Incremental NLU" - EMNLP 2020]
- Transformers + restart-incrementality: better incremental performance than auto-regressive models, but at high computational cost, since every new token triggers a full re-encoding of the prefix (sketched below). A perfectly incremental neural model thus remains to be found. [Madureira et al. (2024). "When Only Time Will Tell: Interpreting How Transformers Process Local Ambiguities Through the Lens of Restart-Incrementality" - ACL 2024]
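Below is a minimal, library-free sketch of the restart-incremental policy: on each new token, re-run a full-sequence model on the entire prefix and diff its output against the previous step. `label_prefix` is a hypothetical stand-in for a bidirectional tagger, hard-wired here to flip its analysis of the garden path "the old man the boats".

```python
# Restart-incrementality in miniature: re-encode the whole prefix at every step.
def label_prefix(prefix: list[str]) -> list[str]:
    """Hypothetical stand-in for a bidirectional tagger: once the second 'the'
    appears, the analysis flips from 'old/ADJ man/NOUN' to 'old/NOUN man/VERB'."""
    verb_reading = len(prefix) >= 4 and prefix[3] == "the"
    tag = {"the": "DET", "boats": "NOUN",
           "old": "NOUN" if verb_reading else "ADJ",
           "man": "VERB" if verb_reading else "NOUN"}
    return [tag[w] for w in prefix]

tokens = ["the", "old", "man", "the", "boats"]
previous, revisions, passes = [], 0, 0
for t in range(1, len(tokens) + 1):
    current = label_prefix(tokens[:t])   # full re-encoding of the growing prefix
    passes += 1
    # Revisions: positions whose label changed relative to the previous step.
    revisions += sum(a != b for a, b in zip(previous, current))
    print(f"{tokens[:t]} -> {current}")
    previous = current

print(f"{passes} full passes, {revisions} revisions of earlier output")
```

The ability to revise (two revisions here, triggered at the fourth token) is what restart-incremental bidirectional models buy; the price is the re-encoding loop, which touches O(n^2) tokens overall where the cached auto-regressive loop above touches O(n).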