While deep neural networks are a common machine learning technique used to train models for tasks like language translation and image classification, they have some drawbacks, according to the study authors.
“One weakness of such models is that, unlike humans, they are unable to learn multiple tasks sequentially,” the authors note. “The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence.”
For the study, the researchers, led by James Kirkpatrick, a research scientist at DeepMind, trained a model on a sequence of tasks while protecting the network weights most important to previously learned tasks. The researchers found the trained model could retain knowledge even when it had not performed a task for an extended period.
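The technique the paper describes, elastic weight consolidation, works by adding a quadratic penalty that pulls each weight back toward the value it held after the previous task, scaled by an estimate of how important that weight was to the old task. Below is a minimal PyTorch sketch of such a penalty; the function name, the `fisher` and `old_params` dictionaries, and the `lam` value are illustrative assumptions, not code from the study.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Quadratic penalty anchoring weights that mattered for a previous task.

    fisher     -- per-parameter importance estimates (e.g. diagonal Fisher information)
    old_params -- snapshot of the parameters after training on the previous task
    lam        -- penalty strength; 1000.0 is an illustrative value, not the paper's
    """
    penalty = torch.tensor(0.0)
    for name, param in model.named_parameters():
        # Penalize movement away from the old value, weighted by importance.
        penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# When training on a new task B, the total loss would then be:
#   loss = task_b_loss + ewc_penalty(model, fisher_a, params_a)
#   loss.backward(); optimizer.step()
```

Because weights the old task barely used carry a small penalty, the network stays free to repurpose them for the new task, which is what lets it learn sequentially without overwriting prior knowledge.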
“We hope that this research represents a step toward programs that can learn in a more flexible and efficient way,” the authors write in a DeepMind blog post. “We have shown that catastrophic forgetting is not an insurmountable challenge for neural networks.”