Modeling the learning of the English past tense with memory-based learning
Modeling the acquisition and final state of English past tense inflection has been an ongoing challenge since the mid-1980s. A number of rule-based and connectionist models have been proposed over the years, but the former usually offer no account of how the rules are learned, and the latter often rely on implausible assumptions about vocabulary growth and feedback. We investigate an approach that addresses these criticisms: a memory-based learning model that generalizes by analogy. The model learns the English past tense well compared to previous systems. However, a more detailed analysis revealed several results inconsistent with child language data, including the generation of incorrect irregular forms (e.g. think-thank instead of thought) and the system's inability to produce regularized forms from irregular roots (e.g. broked). We discuss how the model could be modified to accommodate this additional child production data, though we note that these modifications would also detract from the simplicity, and thus the plausibility, of the approach.
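The memory-based idea can be illustrated with a toy sketch: a learner that simply stores (base, past) pairs and inflects a new verb by copying the transformation of its most similar stored verb. The verb list, the suffix-overlap similarity, and the transfer rule below are illustrative assumptions only, not the actual model (which operates over phonological feature representations); the sketch does, however, reproduce the kind of over-irregularization error noted above (e.g. bring-brang by analogy with ring-rang).

```python
# Toy memory-based (instance-based) past-tense learner: predict by analogy
# with the nearest stored exemplar. Illustrative assumptions throughout.

MEMORY = {
    "walk": "walked", "talk": "talked", "jump": "jumped",
    "sing": "sang", "ring": "rang", "think": "thought",
}

def shared_suffix_len(a, b):
    """Similarity = length of the longest common suffix of two base forms."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def predict_past(verb):
    """Inflect `verb` by transferring its nearest neighbour's transformation."""
    neighbour = max(MEMORY, key=lambda v: shared_suffix_len(verb, v))
    base, past = neighbour, MEMORY[neighbour]
    k = shared_suffix_len(verb, base)
    # Regular neighbour: apply the productive -ed pattern.
    if past.endswith("ed") and past[:-2] == base:
        return verb + "ed"
    # Irregular neighbour whose whole base matches the new verb's suffix:
    # swap in the neighbour's past form (ring-rang => bring-"brang",
    # the over-irregularization error discussed above).
    if k == len(base):
        return verb[:len(verb) - len(base)] + past
    # Otherwise fall back to the regular pattern.
    return verb + "ed"
```

For example, `predict_past("bring")` finds ring-rang as the nearest exemplar and analogizes to brang, while verbs with no close irregular neighbour fall through to the regular -ed pattern.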