Big Data & Machine Learning - Keys 2 AI Progress


After a short pause, I would like to return to advances in artificial intelligence and learning algorithms. Hardly a day goes by without news of progress in this field: all the top technology firms, including Google, Facebook, IBM and Baidu, are diving deeper into the subject, with ever more stunning results.

Still, for the algorithms to learn, large amounts of data are required. As stated at the very beginning of 2015, big data is one of the technologies of the year that deserves the most attention and hype. Remarkably, big data volumes are now created specifically for algorithm training. And, as the MIT Technology Review explains, this is “easier said than done”[i]: for natural language processing, text databases have to be annotated so that algorithms can parse through the volumes of data. “The annotation must describe the content of the text but without appearing within it. To understand the link, a learning algorithm must then look beyond the mere occurrence of words and phrases but also at their grammatical links and causal relationships”[ii].
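To make the idea of annotation that describes the text “without appearing within it” concrete, here is a minimal sketch of so-called standoff annotation, where the labels live outside the text and point to character spans. The sample text, spans and labels are illustrative assumptions, not any real dataset’s format.

```python
# Minimal sketch of "standoff" annotation: labels describe spans of the
# text but are stored outside it (hypothetical example, not a real format).

text = "Google DeepMind trains reading algorithms on news articles."

# Each annotation records a (start, end) character span and a label;
# the text itself is never modified.
annotations = [
    (0, 15, "ORGANIZATION"),   # "Google DeepMind"
    (45, 58, "TOPIC"),         # "news articles"
]

def annotated_spans(text, annotations):
    """Return the labelled snippets referenced by the standoff annotations."""
    return [(text[start:end], label) for start, end, label in annotations]

for snippet, label in annotated_spans(text, annotations):
    print(f"{label}: {snippet}")
```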

Google DeepMind, a pioneer in the domain, has opted for a different, organic solution: it processes articles from the Daily Mail and CNN websites, whose typical organization and structure allow algorithms to learn the pattern easily. Such content is also authentic and available in large volumes. The database has been used to compare traditional NLP techniques, which measure the distance between combinations of words, with the neural network approaches we have discussed extensively this year; the result is not surprising but convincing. It “clearly shows how powerful neural nets have become … the best neural nets can answer 60 percent of the queries put to them”[iii].
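How such a corpus can be turned into reading-comprehension queries is easy to sketch: a news highlight becomes a question by blanking out one entity, which the algorithm must then recover from the article. The highlight, entity and placeholder token below are illustrative assumptions, loosely in the spirit of the Daily Mail/CNN setup rather than its actual format.

```python
# Hedged sketch of building a cloze-style query from a news highlight.
# The sentence, entity and "@placeholder" token are illustrative choices.

def make_cloze(highlight, entity, placeholder="@placeholder"):
    """Blank out one entity so an algorithm must recover it from the article."""
    return highlight.replace(entity, placeholder), entity

highlight = "Zurich is named the most expensive city in the survey"
query, answer = make_cloze(highlight, "Zurich")
print(query)   # "@placeholder is named the most expensive city in the survey"
print(answer)  # "Zurich"
```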

Other experiments show even more impressive results. TechCrunch and WIRED have both reported on Facebook’s and Google’s artificial intelligence advances in image recognition. Neural networks are not only capable of recognizing images but can also follow the learned patterns to reconstruct images. According to Rob Fergus, a Facebook artificial intelligence researcher, “It understands the structure of how images work”[iv]. And the experiments go even further. To bring the learning process closer to that of the human brain, the researchers pit two neural networks against each other: “one network is built to recognize natural images, and the other does its best to fool the first”[v]. This, too, only becomes possible after the neural networks have processed a large number of images and extracted patterns from huge databases.
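The two-network game can be sketched in one dimension: a tiny logistic “discriminator” learns to tell numbers drawn near 4.0 from a “generator’s” fakes, while the generator nudges a single offset to fool it. Everything here (the 1-D setup, the parameters, the learning rate) is an illustrative assumption, not the models reported by Facebook or Google.

```python
import numpy as np

# Toy sketch of the adversarial game: discriminator D(x) = sigmoid(w*x + c)
# tries to separate real samples (centred at 4.0) from fakes g(z) = z + b,
# while the generator shifts b to fool it. Sizes and rates are illustrative.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, c, b = 0.1, 0.0, -4.0   # discriminator weights and generator offset
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 0.5)        # sample from the "real" distribution
    fake = rng.normal(0.0, 0.5) + b    # generator output

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    s_real, s_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - s_real) * real - s_fake * fake)
    c += lr * ((1 - s_real) - s_fake)

    # Generator: gradient ascent on log D(fake), shifting b to fool D.
    s_fake = sigmoid(w * fake + c)
    b += lr * (1 - s_fake) * w

print(f"generator offset b = {b:.2f} (real data centred at 4.0)")
```

With each player improving against the other, the generator’s offset drifts from its starting point toward the region the discriminator labels “real”, which is the core of the fooling game the quote describes.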

But why is it so important to have a structured data corpus for intelligent algorithms to learn from? What does it bring? The example of machine translation techniques explained by The New York Times Magazine[vi] shows clearly how it works.

In translation, it is often all about context. Traditional rule-based machine translation took context into account, but it also relied on a glossary and drew the terms’ translations from there. The statistical strategy (like the one behind Google’s and Microsoft’s instant translation solutions) pays more attention to context and environment. The following example shows how it works: “The English word “bank,” to use one frequent example, can mean either “financial institution” or “side of a river,” but these are two distinct words in French. When should it be translated as “banque,” when as “rive”? A probabilistic model will have the computer examine a few of the other words nearby. If your sentence elsewhere contains the words “money” or “robbery,” the proper translation is probably “banque.”[vii]
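A toy version of the probabilistic idea quoted above is easy to write down: pick “banque” or “rive” for the English “bank” by counting clue words that appear nearby. The clue lists and scoring are illustrative assumptions, far simpler than any real statistical model.

```python
# Toy word-sense disambiguation in the spirit of the quoted example:
# score each French candidate by nearby context words. Clue lists are
# illustrative assumptions, not a real translation model.

CONTEXT_CLUES = {
    "banque": {"money", "robbery", "account", "loan"},
    "rive": {"river", "water", "shore", "fishing"},
}

def translate_bank(sentence):
    """Choose the candidate whose clue words overlap the sentence most."""
    words = set(sentence.lower().replace(",", " ").replace(".", " ").split())
    scores = {cand: len(words & clues) for cand, clues in CONTEXT_CLUES.items()}
    return max(scores, key=scores.get)

print(translate_bank("The robbers took the money to the bank."))  # banque
print(translate_bank("We walked along the bank of the river."))   # rive
```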

All these achievements mean that artificial intelligence continues to claim its place in the modern world, and that neural networks are learning fast enough to process the wealth of available data and turn it into knowledge and experience. Once again, it is the combination of technologies into a powerful solution that helps researchers progress toward even more intelligent algorithms.


image source: pixabay.com


[i] Google DeepMind Teaches Artificial Intelligence Machines to Read, Emerging Technology from the arXiv, MIT Technology Review, June 17, 2015, online http://www.technologyreview.com/view/538616/google-deepmind-teaches-artificial-intelligence-machines-to-read/, accessed on June 22, 2015

[ii] Ibid.

[iii] Ibid.

[iv] Facebook’s New AI Can Paint, But Google’s Knows How to Party by Cade Metz, WIRED, June 19, 2015, online http://www.wired.com/2015/06/facebook-googles-fake-brains-spawn-new-visual-reality/, accessed on June 22, 2015

[v] Ibid.

[vi] Is Translation an Art or a Math Problem?, The New York Times Magazine, online http://www.nytimes.com/2015/06/07/magazine/is-translation-an-art-or-a-math-problem.html?_r=0

[vii] Ibid.