From the point of view of Artificial General Intelligence, reinforcement learning (RL) agents such as Hutter’s universal, Pareto-optimal, incomputable AIXI rely heavily on the definition of the rewards, which must be supplied by some external “teacher” to specify the tasks to solve. AIXI, as it stands, therefore cannot be considered a fully autonomous agent.

Furthermore, it has recently been shown that AIXI can converge to suboptimal behavior in certain environments, which highlights the intrinsic difficulty of RL and its non-obvious pitfalls.

We propose a new model of intelligence, the Knowledge-Seeking Agent (KSA), halfway between Solomonoff Induction and AIXI, which defines a completely autonomous agent that requires no teacher. The goal of this agent is not to maximize arbitrary rewards, but “simply” to explore its world entirely, in an optimal way. A proof of strong asymptotic optimality for a class of horizon functions shows that, unlike AIXI in its own domain, this agent behaves as expected. Some implications of such an unusual agent are discussed.
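To make the knowledge-seeking idea concrete, here is a minimal toy sketch, not the paper’s formal definition: it assumes a small finite hypothesis class with deterministic action-to-observation maps (names like `hypotheses`, `posterior`, and `expected_info_gain` are illustrative), and has the agent choose the action that maximizes the expected reduction of posterior entropy, i.e., expected information gain, rather than any externally supplied reward.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a distribution given as {hypothesis: weight}."""
    return -sum(w * math.log2(w) for w in p.values() if w > 0)

def expected_info_gain(posterior, hypotheses, action):
    """Expected drop in posterior entropy after observing the outcome of `action`."""
    gain = entropy(posterior)
    # Group hypotheses by the observation they predict for this action.
    outcomes = {}
    for h, w in posterior.items():
        obs = hypotheses[h](action)
        outcomes.setdefault(obs, {})[h] = w
    # Subtract the expected entropy of the updated posterior.
    for group in outcomes.values():
        mass = sum(group.values())
        conditional = {h: w / mass for h, w in group.items()}
        gain -= mass * entropy(conditional)
    return gain

# Illustrative world: three hypotheses about which of two doors hides a light.
hypotheses = {
    "h0": lambda a: 1 if a == 0 else 0,  # door 0 lights up
    "h1": lambda a: 1 if a == 1 else 0,  # door 1 lights up
    "h2": lambda a: 0,                   # no door ever lights up
}
posterior = {"h0": 1 / 3, "h1": 1 / 3, "h2": 1 / 3}

# The agent's "value" of each action is purely informational.
gains = {a: expected_info_gain(posterior, hypotheses, a) for a in (0, 1)}
best_action = max(gains, key=gains.get)
```

By symmetry, both actions carry the same expected information gain here; a knowledge-seeking agent is indifferent between them, whereas a reward-maximizing agent would need a teacher-supplied reward signal to prefer one.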