Lakretz raises several interesting perspectives on the data we recently reported in Ferrigno et al. (2020). First, he argues that "non-recursive mechanisms" may explain the data we observed. We certainly agree that computational models can handle stimuli like ours when they have the necessary architectural constraints built in (such as the specific LSTM model Lakretz suggests). In our own modeling, we have found that simple recurrent neural networks learn ordinal knowledge but not the required hierarchical generalizations. These results emphasize that success on this task is interesting and informative precisely because it rules out a variety of plausible learning architectures.

We also agree that success on the task does not mean that a learning model has acquired a grammar that can generate arbitrarily deep embedding. Lakretz suggests that it is crucial to test recursion depths deeper than those seen in training, but we note that no matter how much training is provided, a critic could always construct an ad hoc architecture to process it and claim that still deeper nesting is required for "true" recursion. In this context, it is important to remember that, at least in language, humans themselves are incapable of more than two levels of center-embedding (Gibson & Thomas 1999). Such limitations are not unique to humans, but present in all physical computational systems. Due essentially to these limits on recursion depth, formal notions of recursion are notoriously difficult to connect to behavior.
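To make the ordinal-versus-hierarchical contrast concrete, here is a minimal sketch of our own (it is not the stimuli or the models from Ferrigno et al. 2020; bracket sequences simply stand in for center-embedded structures). A stack-based check embodies hierarchical knowledge and generalizes to any depth, while a recognizer that only memorizes which token occurred at which serial position during training fails as soon as sequences get deeper than the training set:

```python
# Illustrative sketch only -- not the actual task or models from
# Ferrigno et al. (2020).

PAIRS = {"(": ")", "[": "]", "{": "}"}
OPENERS = list(PAIRS)

def center_embedded(depth):
    """Build a center-embedded sequence, e.g. depth 3 -> "{[()]}"."""
    seq = ""
    for i in range(depth):
        opener = OPENERS[i % len(OPENERS)]
        seq = opener + seq + PAIRS[opener]  # wrap one level deeper
    return seq

def is_well_formed(seq):
    """Hierarchical (stack-based) check: generalizes to any depth."""
    stack = []
    for tok in seq:
        if tok in PAIRS:
            stack.append(PAIRS[tok])
        elif not stack or stack.pop() != tok:
            return False
    return not stack

def make_ordinal_recognizer(training_seqs):
    """'Ordinal' strategy: memorize which token appeared at which serial
    position in training.  Matches training items, but longer (deeper)
    sequences contain position/token pairs it has never seen."""
    memorized = {(i, tok) for seq in training_seqs for i, tok in enumerate(seq)}
    return lambda seq: all((i, tok) in memorized for i, tok in enumerate(seq))

# Train the ordinal recognizer on depths 1 and 2 only.
accepts = make_ordinal_recognizer([center_embedded(1), center_embedded(2)])
```

With this setup, `is_well_formed` accepts `center_embedded(3)` while `accepts` rejects it: the stack generalizes beyond the training depth and the positional lookup table does not, which is the kind of dissociation at issue in the discussion above.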