The Unintended Consequences of Rationality

by Leah Burrows, Harvard University

A century of economic theory assumed that, given their available options, humans would always make rational decisions. Economists even had a name for this construct: homo economicus, the economic man.

Have you ever met a human? We’re not always the most rational bunch. More recent economic theory confronts that fact, taking into account the importance of psychology, societal influences and emotion in our decision-making.

In an interview, Harvard School of Engineering and Applied Sciences professor David C. Parkes contends that rational models of economics are applicable to artificial intelligence (AI). He notes, for example, that the revelation principle, which holds that the design of economic institutions can be restricted to those in which it is in participants' best interest to truthfully reveal their utility functions, may become more manifest in AI systems.

Still, Parkes acknowledges, “we don’t believe that the AI will be fully rational or have unbounded abilities to solve problems. At some point you hit the intractability limit, things we know cannot be solved optimally, and at that point, there will be questions about the right way to model deviations from truly rational behavior.”

Parkes suggests rational AI systems could eventually make better property purchase and sale decisions than people, citing research on building an AI that constructs a model of a person's preferences through elicitation.
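
Preference elicitation can be illustrated with a small sketch: the elicitor asks pairwise questions ("do you prefer property A or B?") and prunes candidate utility functions inconsistent with the answers. The properties, features, weights, and answers below are hypothetical illustrations, not drawn from the article or from Parkes's research.

```python
# Minimal sketch of preference elicitation via pairwise comparisons.
# All features and answers here are hypothetical.
import itertools

# Hypothetical properties described by (size, location score, price penalty).
properties = {
    "A": (3, 8, -5),
    "B": (5, 4, -7),
    "C": (2, 9, -3),
}

def utility(weights, features):
    """Linear utility: weighted sum of a property's features."""
    return sum(w * x for w, x in zip(weights, features))

# Candidate weight vectors the elicitor initially considers plausible.
candidates = list(itertools.product((0, 1, 2), repeat=3))

def observe_choice(chosen, rejected, candidates):
    """Keep only weight vectors consistent with the stated preference."""
    return [w for w in candidates
            if utility(w, properties[chosen]) >= utility(w, properties[rejected])]

# Simulated answers to two elicitation queries:
# the person prefers A to B, and C to A.
candidates = observe_choice("A", "B", candidates)
candidates = observe_choice("C", "A", candidates)

# The surviving weight vectors form the elicitor's current preference model.
print(len(candidates), "consistent weight vectors remain")
```

Each answered query shrinks the set of utility functions consistent with the person's behavior; further queries would narrow the model until the AI can predict choices reliably.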

He also notes that an AI observing someone's behavior can begin building a preference model through inverse reinforcement learning. Parkes says economic AIs must solve problems made more complex by the presence of other participants in the system, and he warns that rationality can lead to unintended results.
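
The core inversion behind inverse reinforcement learning can be sketched in a few lines: given observed behavior, infer which reward functions would make that behavior optimal. The three-state chain below is a hypothetical toy example to show the concept, not Parkes's actual formulation.

```python
# Toy illustration of inverse reinforcement learning (IRL):
# from observed actions, recover the reward functions that explain them.
# The environment and rewards here are hypothetical.
import itertools

# States 0..2; in each state the agent may "stay" or "move" one state right.
# Observed behavior: the agent moves right until it reaches state 2.
observed_action = {0: "move", 1: "move", 2: "stay"}

def greedy_action(reward, state):
    """One-step greedy choice under a candidate reward vector."""
    stay_value = reward[state]
    move_value = reward[min(state + 1, 2)]
    return "move" if move_value > stay_value else "stay"

# Candidate reward vectors assigning each state a reward in {0, 1, 2}.
candidates = list(itertools.product((0, 1, 2), repeat=3))

# Keep only rewards under which every observed action is greedy-optimal.
consistent = [r for r in candidates
              if all(greedy_action(r, s) == a
                     for s, a in observed_action.items())]

print(consistent)  # → [(0, 1, 2)]
```

Here only one candidate survives: rewards increasing toward state 2, which is exactly the reward structure that would rationalize always moving right. Real IRL methods work over continuous reward spaces and stochastic policies, but the inference pattern is the same.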
