“Deep Learning” – Driving Mobile Cloud Development

In addition to the large global commercial forces driving mobile clouds, there are almost weekly developments in new areas fundamentally linked to the mobile cloud sector.

One such development is “Deep Learning,” which represents the next evolution in cloud-based assistants.

The area hyped as “Deep Learning” (and similar developments, such as those referred to by IBM as “Cognitive Computing”) encompasses current software development in both the academic world and the intelligent assistant area.

Deep Learning describes a systematic approach to building machine-intelligence software platforms (sometimes referred to simply as “learning”) that mimic, as best we can, how neural networks work. The aim is not just interactive, speaker-independent voice recognition applications but also, by analogy with human networks of synapses and neurons, software that learns from an ever-increasing array of data sources. These sources include language processing, image recognition and a host of other “big data” envelopes.

In mimicking the neural networks of the brain, algorithms and an architecture of stacked abstraction layers are developed to approximate, beginning in a primitive way, a human model of how to go from the abstract to the more concrete.
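To make the layered idea concrete, the sketch below is a deliberately toy Python example, with arbitrary layer sizes and random, untrained weights of our own choosing; a real deep learning system learns these weights from data. It shows only the architectural point: each successive layer re-represents its input at a further level of abstraction.

```python
# A minimal sketch (no vendor's implementation) of stacked abstraction layers:
# each layer re-represents its input at a different level of abstraction.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_outputs):
    """One fully connected layer with a non-linearity (random weights for illustration only)."""
    weights = rng.normal(scale=0.1, size=(inputs.shape[-1], n_outputs))
    bias = np.zeros(n_outputs)
    return np.tanh(inputs @ weights + bias)

# Imagine 'pixels' is a tiny flattened image; the sizes below are arbitrary.
pixels = rng.random(64)            # raw input
edges = layer(pixels, 32)          # first re-representation: simple patterns
shapes = layer(edges, 16)          # second: combinations of patterns
concept = layer(shapes, 4)         # final, most compact representation
print(concept)
```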

While the mapping of these software developments to actual synaptic human thought processes is not clear to the authors, the need for a dramatic expansion of device/cloud interaction is.

Beyond the academic interest in deep learning have been the activities of the Internet giants:

Google – acquisition of DeepMind. Google has just acquired DeepMind, a very early-stage British software company, reportedly for about $500 million.

Why? The answer and the valuation are, for most of us, at the very least an “enigma wrapped in a mystery.” The best clue as to why this would be important to Google is the human talent that has been acquired.

The FT tried to give a product side to this acquisition by stating that DeepMind had “already developed a neural network that learned to play 7 different Atari 2600 computer games (with more skill than a human) using only raw pixels as input and no game specific information.”

We cannot vouch for this report, but we can say that such a leap in heuristic capability in a device would be astonishing.

The impact, if truly achieved within the processing capabilities of mass-market mobile cloud computing, would be immediately applicable to the next stage of search, Google’s core business.
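To illustrate the kind of learning the FT report describes, here is a deliberately toy sketch of reinforcement learning from raw “pixels” and a reward signal alone. The miniature strip “game,” the pixel rendering and the Q-table are our own illustrative assumptions; DeepMind’s reported system used deep neural networks and actual Atari 2600 games, not a lookup table.

```python
# Toy sketch: an agent that learns a game from raw pixels and reward only.
import random
from collections import defaultdict

STRIP_LENGTH = 5                     # positions 0..4; reward sits at the right edge

def game_step(position, action):
    """Move left (0) or right (1) along the strip; reward 1.0 on reaching the right edge."""
    position = max(0, min(STRIP_LENGTH - 1, position + (1 if action == 1 else -1)))
    return position, (1.0 if position == STRIP_LENGTH - 1 else 0.0)

def render_pixels(position):
    """The agent only ever sees raw 'pixels' (a 0/1 tuple), never the position itself."""
    return tuple(1 if i == position else 0 for i in range(STRIP_LENGTH))

q_values = defaultdict(lambda: [0.0, 0.0])       # pixels -> estimated value of each action
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(300):
    position = 0
    for _ in range(25):
        pixels = render_pixels(position)
        left, right = q_values[pixels]
        if random.random() < epsilon or left == right:
            action = random.randrange(2)         # explore (or break a tie) at random
        else:
            action = 0 if left > right else 1    # otherwise act greedily
        position, reward = game_step(position, action)
        next_best = max(q_values[render_pixels(position)])
        # Standard Q-learning update driven by the reward signal alone.
        q_values[pixels][action] += alpha * (reward + gamma * next_best - q_values[pixels][action])
        if reward:
            break

print(q_values[render_pixels(0)])    # action 1 (move right) should now score higher
```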

The announcement of the DeepMind acquisition has brought its CEO, Demis Hassabis, into the limelight; his YouTube lectures are probably the best logical presentation we have seen to date linking the biological-mimicry approach to machine intelligence.

It should be noted that, amid all the commentary on Dr Hassabis’ work and artificial intelligence, his doctoral research on the nature of memory, imagination and episodic remembrance has a fascinating extension to AR and the concept that reality may simply be an imaginative state.

This past summer, Google also announced a landmark in artificial intelligence (a deep learning synonym): applying a self-taught architecture to thousands of photos drawn from YouTube. The software correctly recognized over 3,200 items, a 70% improvement over previous approaches. Google stated that over 1,000 servers sorted through 10 million images. The goal is likely an application linked to a service such as Google Now, recognizing through voice interaction what you are seeing and its relevance to the profile on your smartphone.
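As a rough illustration of “self-taught” learning from unlabelled images, the sketch below trains a tiny autoencoder on random stand-in image patches, with no labels involved. All sizes, data and training settings are our own assumptions and bear no relation to the scale or design of Google’s reported system.

```python
# Minimal self-taught feature learning: a tiny autoencoder that learns to
# compress and reconstruct unlabelled "image patches".
import numpy as np

rng = np.random.default_rng(1)
patches = rng.random((500, 64))          # 500 unlabelled 8x8 stand-in patches

n_hidden = 16
W_enc = rng.normal(scale=0.1, size=(64, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, 64))
lr = 0.01

for step in range(2000):
    hidden = np.tanh(patches @ W_enc)            # learned features
    reconstruction = hidden @ W_dec
    error = reconstruction - patches             # how far off the rebuild is
    # Backpropagate the reconstruction error (plain gradient descent, no labels).
    grad_dec = hidden.T @ error / len(patches)
    grad_hidden = (error @ W_dec.T) * (1 - hidden ** 2)
    grad_enc = patches.T @ grad_hidden / len(patches)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(np.mean(error ** 2))   # reconstruction error falls as features are learned
```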

It is interesting to note that Geoffrey Hinton, one of the world’s leading thinkers and algorithmic developers in the field of deep learning, works part-time for Google as a Distinguished Researcher after Google acquired his company, DNNresearch.

Apple – the continued evolution of Siri. We have pointed out earlier that Apple’s intelligent assistant “Siri” must of necessity rest on a strong set of device/cloud linkages, including a logical structure that identifies what can be done (answered) on the smartphone and what needs the added ICT resources of the cloud.

It is clear that as Siri expands in capability and use, drawing not only on Apple-centric cloud resources but also on public and private cloud platforms, this concept of deep learning will play an ever-greater role in the mobile cloud.
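A purely illustrative sketch of that device/cloud split follows; the intent names and routing rules are hypothetical assumptions of ours, not Apple’s implementation.

```python
# Illustrative routing logic: decide whether a recognised request can be
# answered on the handset or must be forwarded to cloud resources.
ON_DEVICE_INTENTS = {"set_alarm", "open_app", "call_contact"}   # assumed local capabilities

def route_request(intent, network_available):
    """Return where a recognised intent should be handled."""
    if intent in ON_DEVICE_INTENTS:
        return "device"                      # local data and APIs are enough
    if network_available:
        return "cloud"                       # needs web knowledge or heavier models
    return "defer"                           # queue until connectivity returns

print(route_request("set_alarm", network_available=False))            # -> device
print(route_request("restaurant_search", network_available=True))     # -> cloud
```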

Facebook – In an interview with MIT Technology Review in September 2013, Facebook indicated that, as data sets grow in size, people accumulate more friends and mobile use expands, deep learning could be used to help people organize their Facebook accounts, for example their photos. All of this would be done in the Facebook cloud.
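As a rough sketch of the photo-organization idea, the example below groups photos by the similarity of vector “embeddings” such as a deep network might produce; the embeddings, threshold and greedy grouping rule are all illustrative assumptions of ours, not Facebook’s method.

```python
# Group photos by similarity of their (hypothetical) learned embeddings.
import numpy as np

rng = np.random.default_rng(2)
photo_embeddings = {f"photo_{i}": rng.normal(size=8) for i in range(6)}   # stand-in vectors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_photos(embeddings, threshold=0.5):
    """Greedy grouping: a photo joins the first group whose anchor it resembles."""
    groups = []
    for name, vec in embeddings.items():
        for group in groups:
            if cosine(vec, embeddings[group[0]]) > threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

print(group_photos(photo_embeddings))
```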

Yahoo – acquired LookFlow in October 2013 to apply deep learning for photo recognition.


* Photo by Rob Bulmahn [CC-BY-2.0], via Flickr Creative Commons