Andy Clark at Edge:
I was enthralled by Dennett and Chalmers’ recent discussion of the threats and prospects regarding artificial superintelligences. Dennett thinks we should protect ourselves by doing all we can to keep powerful AIs operating at the level of suggestion-making tools, while Chalmers is impressed by the market forces that will probably push us into devolving more and more responsibility to these opaque and alien minds. But I felt as if their picture of the space of possible AI minds could be usefully refined, and with that in mind I’d like to push on two further dimensions.
The first is action. Agents that can act on their (real or simulated) worlds can choose “epistemic” actions that both test and improve their model of that world. A simple example might be a robot equipped with a camera and an arm that can push and prod objects in its field of vision. Such a robot can actively create sensorimotor flows that help reveal objects as integrated wholes distinct from their backgrounds and from other objects. These systems, simple versions of which have been explored by Giorgio Metta and others, possess a crucial but under-appreciated capacity: they can use their own worldly actions to refine or disambiguate information, both during learning and in the midst of practical action.
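The disambiguating power of epistemic action can be illustrated with a toy sketch (my own illustrative construction, not drawn from Metta’s robots): a one-dimensional scene whose appearance alone does not reveal where one object ends and the next begins, but where pushing a cell makes everything belonging to the same hidden object move together. Action, not passive observation, recovers the segmentation.

```python
# Toy world: a 1-D "scene" of 6 cells. The hidden ground truth
# (assumed for this sketch) is that cells 0-2 form object A and
# cells 3-5 form object B; appearance alone does not show the boundary.
TRUE_OBJECTS = [0, 0, 0, 1, 1, 1]

def push(cell):
    """Epistemic action: push one cell and observe which cells move.
    All cells belonging to the same hidden object move together."""
    obj = TRUE_OBJECTS[cell]
    return [TRUE_OBJECTS[i] == obj for i in range(len(TRUE_OBJECTS))]

def segment_by_pushing():
    """Recover object groupings purely from self-generated motion:
    push each still-unlabeled cell and group whatever moves with it."""
    labels = [None] * len(TRUE_OBJECTS)
    next_label = 0
    for cell in range(len(TRUE_OBJECTS)):
        if labels[cell] is None:
            moved = push(cell)  # act on the world to generate evidence
            for i, did_move in enumerate(moved):
                if did_move:
                    labels[i] = next_label
            next_label += 1
    return labels

print(segment_by_pushing())  # → [0, 0, 0, 1, 1, 1]
```

The point of the sketch is structural: the agent’s model of the scene is underdetermined by passive input, and it is the agent’s own interventions that generate the sensorimotor flow resolving the ambiguity.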