Computational Goals, Values and Decision-Making

Research output: Contribution to journal › Review article


A popular framing of artificial intelligence casts it as a rational agent that always seeks to maximise its expected utility, referred to as its goal. One feature attributed to such rational agents is that they will never select an action that changes their goal. Therefore, one argument goes, if such an agent is to be friendly towards humanity, we must understand how to specify this friendliness in terms of a utility function. Wolfhart Totschnig ("Fully Autonomous AI", Science and Engineering Ethics, 2020) argues, in contrast, that a fully autonomous agent will have the ability to change its utility function and will do so guided by its values. This commentary examines computational accounts of goals, values and decision-making. It rejects the idea that a rational agent will never select an action that changes its goal, but it also argues that an artificial intelligence is unlikely to be purely rational in the sense of always acting to maximise a utility function. It nevertheless challenges the idea that an agent which does not change its goal cannot be considered fully autonomous. It agrees that values are an important component of decision-making and explores a number of reasons why.
Original language: English
Journal: Science and Engineering Ethics
Publication status: Published - 4 Aug 2020

