Can computers learn language like humans?

Tuesday, November 8, 2016

from ACTFL Smartbrief

Machines may never master the distinctly human elements of language

Artificial intelligence is difficult to develop because real intelligence is mysterious. This mystery manifests in language, or “the dress of thought” as the writer Samuel Johnson put it, and language remains a major challenge to the development of artificial intelligence.
“There’s no way you can have an AI system that’s humanlike that doesn’t have language at the heart of it,” Josh Tenenbaum, a professor of cognitive science and computation at MIT, told Technology Review in August.
In September, Google announced that its Google Neural Machine Translation (GNMT) system can now “in some cases” produce translations that are “nearly indistinguishable” from those of humans. Still, it noted:
“Machine translation is by no means solved. GNMT can still make significant errors that a human translator would never make, like dropping words and mistranslating proper names or rare terms, and translating sentences in isolation rather than considering the context of the paragraph or page.”
In other words, the machine doesn’t entirely get how words work yet.

In young infants, language builds on basic abilities like perceiving the world visually and physically, acting through motor systems, and understanding other people’s goals. Beyond compiling pure data input, the mind filters, assimilates, and joins new information to memory to create and break patterns, and it processes information through emotional and social filters.
From a cognitive perspective, to re-create human thinking, machines must mimic human learning with mental model building and psychology components. Technologists do try to duplicate the human thinking process in machines using “neural networks,” layers of interconnected components modeled loosely on the brain. These systems can now easily recognize objects, animals, and faces. But recognizing words is much more difficult, according to Fei-Fei Li, director of the Stanford Artificial Intelligence Lab.
Li works on databases of images tagged with descriptions like “crack in the road” or “dog on a skateboard,” which are used to teach machines. She isn’t convinced that the gap between human and machine intelligence can be bridged by the neural networks now in development, at least not when it comes to language. Li points out that, unlike machines, even young children don’t need visual cues to imagine a dog on a skateboard or to discuss one.
For machines to get closer to that human understanding of language, Li says, AI researchers will need to consider intelligence comprehensively, somehow integrating emotional and social understanding, abstraction, and creativity, in addition to raw information. And that will take a while.
In its One Hundred Year Study on Artificial Intelligence, a Stanford University panel assessed the future of machine intelligence, writing that while recent developments in natural language processing, knowledge representation, and reasoning have been impressive, “the portrayals of artificial intelligence that dominate films and novels, and shape the popular imagination, are fictional…there is no race of superhuman robots on the horizon or probably even possible.”
For now, it seems, true intelligence, with language at its core, remains the domain of creative humans with fantastical imaginations and an appreciation of style.

from qz.com