Monday, December 14, 2015

Computer Program Copies Humans Through Pioneering Human Learning Style


Summary: A computer program copies humans' one-shot learning style well enough to pass visual Turing tests, according to a Dec. 11 study in Science.


human and machine learning of handwritten characters from alphabets around the world; artwork by Danqing Wang: Science Magazine @sciencemagazine via Twitter Dec. 10, 2015

A study published Friday, Dec. 11, 2015, in Science announces the successful passing of visual Turing tests by a pioneering computer program that copies humans in one-shot learning and in forming rich representations of characters from the world’s alphabets.
Five challenging concept-learning tasks test the program’s efficacy in teaching the computer to recognize and reproduce known characters and to generate new symbols. Devised for the study, Omniglot is a data set of 1,623 handwritten characters collected from 50 writing systems, with multiple examples of each character. The program represents each character in terms of such variables as the number, shape, and trajectory of its pen strokes.
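To make the data set’s organization concrete, here is a hypothetical Python sketch of how such stroke-based character data might be structured. The layout, names, and coordinates are assumptions for illustration, not the study’s actual file format:

```python
# Hypothetical sketch of an Omniglot-style data set:
# alphabets -> characters -> pen strokes, where each stroke is a
# trajectory of (x, y) points recorded as the character was drawn.

def count_characters(dataset):
    """Total number of distinct characters across all alphabets."""
    return sum(len(chars) for chars in dataset.values())

def stroke_count(example):
    """Number of pen strokes in a single handwritten example."""
    return len(example)

# A tiny mock data set: two alphabets, three characters.
dataset = {
    "Latin": {
        "T": [[(0, 0), (2, 0)], [(1, 0), (1, 2)]],   # two strokes
        "L": [[(0, 0), (0, 2), (1, 2)]],             # one stroke
    },
    "Greek": {
        "Γ": [[(0, 2), (0, 0), (1, 0)]],             # one stroke
    },
}

print(count_characters(dataset))            # 3
print(stroke_count(dataset["Latin"]["T"]))  # 2
```

The real data set is far larger (1,623 characters, 50 writing systems, multiple drawers per character), but the nesting idea is the same.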
One of the five tasks involves one-shot classification across 10 different alphabets. Shown a single image of a new character, the computer must pick out the same character from a set of 20 distinct handwritten candidates. The Bayesian program yields an average error rate of 3.3 percent, beating the human participants’ average error rate of 4.5 percent.
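The scoring in such a 20-way one-shot task can be sketched as follows, with a generic similarity function standing in for BPL’s probabilistic match score (the study actually refits motor programs to each image; all names here are hypothetical):

```python
# Minimal sketch of one-shot classification scoring: pick the support
# example most similar to the test image, then measure the error rate.

def classify_one_shot(test_image, support_images, similarity):
    """Return the index of the support image most similar to the test image."""
    scores = [similarity(test_image, s) for s in support_images]
    return scores.index(max(scores))

def error_rate(predictions, truths):
    """Fraction of classifications that were wrong."""
    wrong = sum(p != t for p, t in zip(predictions, truths))
    return wrong / len(truths)

# Toy run: "images" are plain numbers, similarity is negative distance.
support = [10, 20, 30]                  # one example per candidate class
sim = lambda a, b: -abs(a - b)
preds = [classify_one_shot(x, support, sim) for x in (11, 29, 19)]
print(preds)                            # [0, 2, 1]
print(error_rate(preds, [0, 2, 2]))     # one mistake out of three, ~0.33
```

In the study, the equivalent of `similarity` is the posterior probability that the same inferred motor program produced both images, which is what drives the 3.3 percent error rate.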
In a visual Turing test of 49 trials, 147 judges compare nine human-generated drawings of a new concept, each based on a single viewed example, with nine computer-generated drawings. Proposed by pioneering British computer scientist Alan Mathison Turing in 1950, the Turing test considers a machine’s ability to imitate human intelligence indistinguishably. With the machine’s ideal performance set at 50 percent, meaning judges guessing at chance, and its worst performance at 100 percent, judges average 52 percent accuracy in discerning human from computer creative output.
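Scoring a judge under this convention reduces to a simple accuracy calculation, sketched here with invented toy labels (50 percent means the judge is at chance, i.e., the machine is indistinguishable):

```python
# Sketch of scoring one visual Turing test judge: the fraction of
# trials where the judge correctly labeled who produced the drawing.

def judge_accuracy(guesses, truths):
    """Fraction of trials where the judge identified the producer."""
    correct = sum(g == t for g, t in zip(guesses, truths))
    return correct / len(truths)

truths  = ["human", "machine", "human", "machine"]
guesses = ["human", "human", "machine", "machine"]  # 2 of 4 correct
print(judge_accuracy(guesses, truths))  # 0.5 -> chance level
```

The study’s reported 52 percent is the average of such per-judge accuracies, barely above the 50 percent chance level.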
A probabilities-based Bayesian program learning (BPL) framework provides the study’s computer with the essential learning-to-learn approach, representing concepts as probabilistic programs that build them compositionally from parts, subparts, and spatial relations. The Bayesian learning approach allows new models to be generated from existing models.
The authors note: “In short, BPL can construct new programs by reusing the pieces of existing ones, capturing the causal and compositional properties of real-world generative processes operating on multiple scales.”
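The compositional idea can be sketched in miniature. The snippet below is a toy illustration only: the primitive names, relation types, and sampling choices are invented placeholders, not the study’s learned priors. It generates a new “character type” by sampling a number of parts (strokes), composing each part from subparts (primitives), and attaching parts via spatial relations:

```python
import random

# Toy illustration of BPL's compositional generative idea: a character
# type = parts (strokes) built from subparts (primitives), joined by
# spatial relations. All distributions here are invented placeholders.

PRIMITIVES = ["line", "arc", "hook"]
RELATIONS = ["independent", "attach-start", "attach-end", "attach-along"]

def sample_character(rng):
    """Sample one character type as a list of parts with relations."""
    n_parts = rng.choice([1, 2, 3])                   # number of strokes
    parts = []
    for i in range(n_parts):
        n_sub = rng.choice([1, 2])                    # subparts per stroke
        subparts = [rng.choice(PRIMITIVES) for _ in range(n_sub)]
        # The first stroke has nothing to attach to; later strokes
        # sample a spatial relation to what was drawn before.
        relation = "independent" if i == 0 else rng.choice(RELATIONS)
        parts.append({"subparts": subparts, "relation": relation})
    return parts

rng = random.Random(0)
char = sample_character(rng)
print(len(char) in (1, 2, 3))  # True: a valid part count was sampled
```

Because new characters are assembled from reusable pieces, the same machinery that classifies a character can also generate fresh, plausible variants of it, which is the reuse the authors describe in the quotation above.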
Humans often learn new concepts in one shot, whereas machine learning typically relies on deep learning models trained on many examples. People may generalize a new concept from a single basic example; machines, by contrast, tend to require hundreds of examples in their programmed learning process.
Human learning further differs from machine learning in the use of new concepts as building blocks for further creativity. People tweak new concepts through rich, follow-up representations. For example, new concepts may be shaped into new or related concepts, or they may be broken down, or parsed, into component parts and relations. The study’s three co-authors note that “people seem to navigate this trade-off with remarkable agility, learning rich concepts that generalize well from sparse data.”
Despite breakthrough advances in artificial intelligence, machine learning fails to yield the flexible, rich representations that so easily appear in human learning. A critical missing piece in machine learning is the ability to generalize rich concepts from limited data.
The authors find that their Bayesian approach helps to fill in the missing piece, as their computer program copies humans in one-shot learning and creative representations.
They observe that “Machine learning and computer vision researchers are beginning to explore methods based on simple program induction, and our results show that this approach can perform one-shot learning in classification tasks at human-level accuracy and fool most judges in visual Turing tests of its more creative abilities.”

Brendan M. Lake of New York University's Psychology Department and Center for Data Science offers Bayesian Program Learning (BPL) as a model for developing human-like machine learning: samim ‏@samim via Twitter Dec. 11, 2015

Acknowledgment
My special thanks to talented artists and photographers/concerned organizations who make their fine images available on the internet.

Image credits:
human and machine learning of handwritten characters from alphabets around the world; artwork by Danqing Wang: Science Magazine‏ @sciencemagazine via Twitter Dec. 10, 2015, @ https://twitter.com/sciencemagazine/status/675098984994766849
Brendan M. Lake of New York University's Psychology Department and Center for Data Science offers Bayesian Program Learning (BPL) as a model for developing human-like machine learning: samim ‏@samim via Twitter Dec. 11, 2015, @ https://twitter.com/samim/status/675293357996904448

For further information:
Beau Cronin @beaucronin. "Bayesian Program Learning for one-shot learning, in the NYT!" Twitter. Dec. 10, 2015.
Available @ https://twitter.com/beaucronin/status/675041234793209858
Carpineti, Alfredo. "Scientists Teach Computer To Learn Like Humans." IFL Science > Technology. Nov. 12, 2015.
Available @ http://www.iflscience.com/technology/scientist-teach-computer-learn-human
Lake, Brendan M.; Ruslan Salakhutdinov; and Joshua B. Tenenbaum. "Human-level concept learning through probabilistic program induction." Science, vol. 350, issue 6266 (Dec. 11, 2015): 1332-1338.
Available @ http://www.sciencemag.org/content/350/6266/1332.full
Popular Mechanics. "Brendan Lake on Machine Learning / PopMech." YouTube. Dec. 10, 2015.
Available @ http://www.youtube.com/watch?v=kzl8Bn4VtR8
samim ‏@samim. "Bayesian Program Learning model for one-shot learning." Twitter. Dec. 11, 2015.
Available @ https://twitter.com/samim/status/675293357996904448
Science Magazine‏ @sciencemagazine. "This week's Science cover feature: teaching computers to learn concepts." Twitter. Dec. 10, 2015.
Available @ https://twitter.com/sciencemagazine/status/675098984994766849

