Abstract—Linked Open Data continues to grow year by year, and its further utilization is expected. Because of the large size of the data, there have been attempts to learn Linked Open Data with neural networks. Since the data keeps expanding across many domains, a neural network that can continuously learn a wide range of knowledge is needed, unlike existing neural networks specialized to learn a single domain. However, neural networks are known to suffer from catastrophic forgetting: previously acquired skills are lost when a new skill is learned. Although existing studies have reported that enhancing modularity can overcome this problem by reducing interference between tasks, they assume that the number of tasks to be learned is given in advance, which does not hold for continuous learning. In this paper, we propose a design approach for neural networks that reduces modularity, expecting that such unspecialization can mitigate catastrophic forgetting in continuous learning. Our results show that, while a neural network with high modularity mitigates forgetting of the tasks learned most recently because of the low interference, as one would expect, a neural network with low modularity performs better in the worst case when evaluated on all the tasks it has learned in the past.
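The abstract does not define modularity; it can be read as the standard Newman measure over the network's connection graph (an assumed reading, since the paper may use a variant). For a graph with adjacency matrix $A_{ij}$, node degrees $k_i = \sum_j A_{ij}$, total edge weight $m = \frac{1}{2}\sum_{ij} A_{ij}$, and module assignments $c_i$:

$Q = \frac{1}{2m} \sum_{ij} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)$

where $\delta(c_i, c_j) = 1$ if units $i$ and $j$ belong to the same module and $0$ otherwise. A high $Q$ means most connections fall within modules, which is why high modularity is associated with low interference between tasks.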
Index Terms—Neural network topology design, catastrophic forgetting, Linked Open Data, modularity.
Lu Chen and Masayuki Murata are with Osaka University, Suita, Osaka, Japan (e-mail: l-chen@ist.osaka-u.ac.jp, murata@ist.osaka-u.ac.jp).
Cite: Lu Chen and Masayuki Murata, "Alleviating Catastrophic Forgetting with Modularity for Continuously Learning Linked Open Data," International Journal of Computer Theory and Engineering, vol. 10, no. 2, pp. 38-45, 2018.