Self-Reinforced Meta Learning for Belief Generation

Alexander Gkiokas, Alexandra Cristea, Matthew Thorpe

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Contrary to common perception, learning does not stop once knowledge has been transferred to an agent. Intelligent behaviour observed in humans and animals strongly suggests that after learning, we self-organise our experiences and knowledge so that they can be reused more efficiently, a process that is unsupervised and employs reasoning based on the acquired knowledge. Our proposed algorithm emulates this meta-learning in silico: it creates beliefs from previously acquired knowledge representations, which in turn become subject to learning and are further self-reinforced. The proposition of meta-learning, in the form of an algorithm that can learn how to create beliefs of its own accord, raises an interesting question: can artificial intelligence arrive at beliefs, rules or ideas similar to the ones we humans come to? The described work briefly analyses existing theories and research, and formalises a practical implementation of a meta-learning algorithm.
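
The abstract gives the idea only in outline. As a rough, purely illustrative sketch (an assumption, not the authors' implementation), the snippet below models acquired knowledge as (subject, relation, object) triples standing in for conceptual graphs, induces a generalised belief whenever two triples share a relation and object, and strengthens that belief with a reinforcement-learning-style value update each time new evidence re-derives it. All names and the update rule are hypothetical.

from collections import defaultdict

class BeliefAgent:
    """Toy agent: acquires knowledge triples, then meta-learns beliefs from them."""

    def __init__(self, learning_rate=0.1):
        self.alpha = learning_rate
        self.knowledge = []                 # acquired (subject, relation, object) triples
        self.beliefs = defaultdict(float)   # generalised belief -> reinforcement value

    def acquire(self, triple):
        """Store a new knowledge representation, then run a meta-learning pass over it."""
        for old in self.knowledge:
            self._generalise(old, triple)
        self.knowledge.append(triple)

    def _generalise(self, a, b):
        """Unsupervised induction: two triples sharing a relation and object yield a belief."""
        (s1, r1, o1), (s2, r2, o2) = a, b
        if r1 == r2 and o1 == o2 and s1 != s2:
            # Inductive generalisation: "anything of this kind has relation r1 to o1".
            self._reinforce(("?x", r1, o1))

    def _reinforce(self, belief, reward=1.0):
        """Self-reinforcement: a belief re-derived from new evidence moves toward the reward."""
        self.beliefs[belief] += self.alpha * (reward - self.beliefs[belief])


agent = BeliefAgent()
agent.acquire(("sparrow", "has", "wings"))
agent.acquire(("crow", "has", "wings"))
agent.acquire(("eagle", "has", "wings"))
print(sorted(agent.beliefs.items(), key=lambda kv: -kv[1]))
# e.g. [(('?x', 'has', 'wings'), ~0.27)] -- the belief strengthens as new evidence re-derives it

In this toy rule, each new piece of knowledge triggers a meta-learning pass, so belief values grow only when fresh evidence re-derives them; the paper's actual formulation uses conceptual graphs and a full reinforcement-learning update rather than this simplified scheme.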
Original language: English
Title of host publication: Research and Development in Intelligent Systems XXXI
Pages: 185-190
Number of pages: 6
ISBN (Electronic): 9783319120690
Publication status: Published - Jan 2014

Keywords

  • Meta Learning
  • Reinforcement Learning
  • Inductive Learning
  • Conceptual Graphs
  • Cognitive Agents
  • Complex Systems
