One prescription for solving a problem well is:

- State the problem, in the simplest way possible. In particular, this statement should involve no contamination with or anticipation of the solution.
- Think about solutions to the stated problem.

Stating a problem in a succinct and crisp manner tends to invite a simple, elegant solution. When a problem cannot be stated succinctly, we wonder whether the problem is even understood. (And when a problem is not understood, we wonder whether a solution can be meaningful.)

Reinforcement learning does step (1) well. It provides a clean, simple language for stating general AI problems. In reinforcement learning there is a set of actions *A*, a set of observations *O*, and a reward *r*. The reinforcement learning problem, in general, is defined by a conditional measure *D(o, r | (o,r,a)^\*)* which produces an observation *o* and a reward *r* given a history *(o,r,a)^\**. The goal in reinforcement learning is to find a policy *pi: (o,r,a)^\* -> a* mapping histories to actions so as to maximize (or approximately maximize) the expected sum of observed rewards.

This formulation is capable of capturing almost any (all?) AI problem. (Are there any other formulations capable of capturing a similar generality?) I don’t believe we yet have good RL solutions from step (2), but that is unsurprising given the generality of the problem.
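As a concrete sketch of this protocol, the loop below has a policy map the history of (observation, reward, action) triples to the next action, while an environment plays the role of *D*. The environment, observations, and policy here are invented toy stand-ins, not any particular benchmark:

```python
import random

def environment(history):
    """A stand-in for the conditional measure D(o, r | history).

    Emits an arbitrary observation and a reward depending on the last
    action taken; a real environment would be problem-specific.
    """
    o = random.choice(["left", "right"])
    last_action = history[-1][2] if history else None
    r = 1.0 if last_action == "go" else 0.0
    return o, r

def policy(history):
    """A (trivial) policy pi: (o,r,a)* -> a, mapping histories to actions."""
    return "go"

history = []          # the growing history (o, r, a)*
total_reward = 0.0
for t in range(10):
    o, r = environment(history)   # environment produces observation and reward
    total_reward += r
    a = policy(history)           # policy chooses the next action
    history.append((o, r, a))

print(total_reward)  # -> 9.0 (no reward on the first round, 1.0 thereafter)
```

The point of the sketch is only the shape of the interaction: reward and observation depend on the whole history, and the policy is free to condition on the whole history as well.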

Note that solving RL in this generality is impossible (for example, it can encode classification, which is itself not solvable without assumptions). The two approaches that can be taken are:
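To make the classification encoding concrete, here is a minimal sketch: each round's observation is an example, the action is a predicted label, and the reward is 1 exactly when the prediction is correct, so the expected sum of rewards is the classifier's accuracy. The dataset and classifier below are made up for illustration:

```python
# Toy labeled dataset: (feature vector, label) pairs, invented for this sketch.
dataset = [((0, 1), 1), ((1, 0), 0), ((1, 1), 1), ((0, 0), 0)]

def classifier_policy(observation):
    """A hypothetical classifier acting as the RL policy: it predicts the
    second feature, which happens to equal the label on this toy data."""
    return observation[1]

reward_sum = 0
for x, y in dataset:
    a = classifier_policy(x)            # the "action" is a predicted label
    reward_sum += 1 if a == y else 0    # reward 1 iff the prediction is correct

accuracy = reward_sum / len(dataset)
print(accuracy)  # -> 1.0 on this toy data
```

Since any classification problem embeds this way, a fully general RL solver would in particular solve classification without assumptions, which is impossible.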

- Simplify the problem. It is very common to consider the restricted problem where the history is summarized by the previous observation (aka a “Markov Decision Process”). In many cases, other restrictions are added.
- Think about relativized solutions (such as reductions).

Both options are under active investigation.
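The Markov simplification can be sketched as follows: the policy conditions only on the current observation (state), not the full history. The two-state chain below is invented for illustration:

```python
# A hypothetical two-state MDP: (state, action) -> (next_state, reward).
transitions = {
    ("s0", "stay"): ("s0", 0.0),
    ("s0", "go"):   ("s1", 1.0),
    ("s1", "stay"): ("s1", 1.0),
    ("s1", "go"):   ("s0", 0.0),
}

def markov_policy(state):
    """Depends only on the current state -- the MDP restriction:
    the previous observation summarizes the entire history."""
    return "go" if state == "s0" else "stay"

state, total = "s0", 0.0
for _ in range(5):
    action = markov_policy(state)
    state, reward = transitions[(state, action)]
    total += reward

print(total)  # -> 5.0 (one reward for reaching s1, then one per step for staying)
```

Contrast this with the general problem above, where the policy may depend on the entire history *(o,r,a)^\** rather than a single summarizing state.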

A famous mathematician once said (I think it was von Neumann):

“I only feel like I really understood a problem when I can state it in 7 languages”