Trying to reach general AI through decision-making (reward maximisation) alone, rather than through a diversity of paradigms

David Silver, Satinder Singh, Doina Precup, Richard S. Sutton, Reward is enough, Artificial Intelligence, Volume 299, 2021. DOI: 10.1016/j.artint.2021.103535.

In this article we hypothesise that intelligence, and its associated abilities, can be understood as subserving the maximisation of reward. Accordingly, reward is enough to drive behaviour that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalisation and imitation. This is in contrast to the view that specialised problem formulations are needed for each ability, based on other signals or objectives. Furthermore, we suggest that agents that learn through trial and error experience to maximise reward could learn behaviour that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence.
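As a concrete (and deliberately tiny) illustration of the kind of agent the hypothesis refers to, here is a minimal sketch of trial-and-error reward maximisation: tabular Q-learning on a toy chain environment. The environment, its size, and all hyperparameters below are my own illustrative assumptions, not from the paper; the paper's claim concerns the objective (maximising a scalar reward), not any particular algorithm.

```python
import random
from collections import defaultdict

class ChainEnv:
    """Toy 5-state chain; only reaching the rightmost state yields reward."""
    def __init__(self, n_states=5):
        self.n_states = n_states
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action 0 moves left, action 1 moves right, clipped to the chain
        move = 1 if action == 1 else -1
        self.state = max(0, min(self.n_states - 1, self.state + move))
        reward = 1.0 if self.state == self.n_states - 1 else 0.0
        return self.state, reward, reward > 0.0

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    q = defaultdict(float)  # (state, action) -> estimated return
    for _ in range(episodes):
        s = env.reset()
        for _ in range(200):  # cap episode length
            # epsilon-greedy: mostly exploit, sometimes explore (ties random)
            if random.random() < eps:
                a = random.choice([0, 1])
            else:
                a = max((0, 1), key=lambda x: (q[(s, x)], random.random()))
            s2, r, done = env.step(a)
            # one-step TD update toward reward plus discounted next value
            target = r + (0.0 if done else gamma * max(q[(s2, 0)], q[(s2, 1)]))
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
            if done:
                break
    return q

q = q_learning(ChainEnv())
print({s: round(max(q[(s, 0)], q[(s, 1)]), 2) for s in range(5)})
```

Nothing in the sketch is told what "good behaviour" looks like; value estimates rise toward the goal state purely because the scalar reward propagates backwards through the updates.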

NOTES:

  • The agent's computational and physical limitations in coping with an overly complex world are the main reason to prefer learning over pre-built knowledge (evolution): learning lets the agent focus first on acquiring the skills that matter most for its own circumstances.
  • An argument for why classification (supervised learning) is less powerful and efficient than RL (see the sketch after this list).
  • The same argument applies to multi-agent formulations versus a single agent confronted with one complex environment (which contains the other agents).
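On the classification note above, one direction of the comparison can be made concrete: a classification task can be recast as a one-step RL problem where the reward is 1 for the correct label and 0 otherwise, so supervised learning sits inside reward maximisation as a special case. The toy task, states, and hyperparameters below are illustrative assumptions of mine, not an implementation from the paper.

```python
import random
from collections import defaultdict

# Hypothetical toy task: the "label" of a state is its parity.
states = list(range(8))
label = lambda s: s % 2

q = defaultdict(float)   # (state, action) -> value estimate
alpha, eps = 0.2, 0.1
for _ in range(2000):
    s = random.choice(states)
    # the learner never sees the label, only a scalar reward for its guess
    if random.random() < eps:
        a = random.choice([0, 1])
    else:
        a = max((0, 1), key=lambda x: (q[(s, x)], random.random()))
    r = 1.0 if a == label(s) else 0.0
    q[(s, a)] += alpha * (r - q[(s, a)])  # one-step task: the target is just r

greedy = {s: max((0, 1), key=lambda x: q[(s, x)]) for s in states}
acc = sum(greedy[s] == label(s) for s in states) / len(states)
print(acc)  # approaches 1.0: correct labels recovered from reward alone
```

The converse does not hold: a classifier trained on labelled pairs has no way to express sequential, interactive problems, whereas the reward formulation covers both.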
