GPT-2 and the Nature of Intelligence

Gary Marcus in The Gradient:

Consider two classic hypotheses about the development of language and cognition.

One main line of Western intellectual thought, often called nativism, goes back to Plato and Kant; in recent memory it has been developed by Noam Chomsky, Steven Pinker, Elizabeth Spelke, and others (including myself). On the nativist view, intelligence, in humans and animals, derives from firm starting points, such as a universal grammar (Chomsky) and core cognitive mechanisms for representing domains such as physical objects (Spelke).

A contrasting view, often associated with the 17th century British philosopher John Locke and sometimes known as empiricism, takes the position that hardly any innate machinery is needed, and that learning and experience are essentially all that is required in order to develop intelligence. On this "blank slate" view, all intelligence is derived from patterns of sensory experience and interactions with the world.

In the days of John Locke and Immanuel Kant, all of this was speculation.

Nowadays, with enough money and computer time, we can actually test this sort of theory, by building massive neural networks, and seeing what they learn.

More here.
