Mobile Device Sensors and Context

A wide range of parties is interested in sensor-based input from mobile devices, from major IT/Internet giants – Apple, Google, Intel – to smaller, emerging companies and numerous academic centers.

In one example, in late 2011 Google hired a former MIT expert in “wearable” devices away from Apple. MIT had developed the MIThril project, which included work on a context-aware cell phone, a Real-Time Context Engine for systematically classifying a person’s actions, and a hardware platform “that combines body-worn computation, sensing, and networking in a clothing-integrated design.”

In another example of interaction in the sensor/context-awareness space, both Apple and Google had earlier filed patent applications for using sensor data from a smartphone in various ways to modify the phone’s performance or offer options to the user. A diagram from the Google application is shown below; it illustrates the phone presumably adapting to the user’s daily routine throughout the day.

Diagram From Google Context-Aware Smartphone Patent Application

[Image: https://www.mobilecloudera.com/wp-content/uploads/2013/06/mob-dev-sensors.jpg]

An expert from Intel offered an eloquent elaboration of the vision of how sensors and context-awareness software may someday enhance users’ experiences:

Imagine a device that uses a variety of sensory modalities to determine what you are doing at an instant, from being asleep in your bed to being out for a run with a friend. By combining hard sensor information, such as where you are and the conditions around you, with soft sensors, such as your calendar, your social network and past preferences, future devices will constantly learn about who you are, how you live, work and play. As your devices learn about your life, they can begin to anticipate your needs. Imagine your PC advising you to leave the house 10 minutes early for your next appointment due to a traffic tie-up on your way to work. Consider a ‘context aware’ remote control that instantly determines who is holding it and automatically selects the Smart TV preferences for that person. All this may sound like science fiction, but this is the promise of ‘context-aware’ computing and we can already demonstrate much of it in the lab.

Google offers developers a Context Toolkit, which is described as follows: “The Context Toolkit aims at facilitating the development and deployment of context-aware applications. By context, we mean environmental information that is part of an application’s operating environment and that can be sensed by the application. The Context Toolkit consists of context widgets and a distributed infrastructure that hosts the widgets. Context widgets are software components that provide applications with access to context information while hiding the details of context sensing.”
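
To make the widget idea concrete, here is a minimal sketch, assuming a simplified setup: a hypothetical LocationWidget publishes a sensed location to subscribers through a ContextListener interface, so the application never touches the underlying sensing code. The class and method names are invented for illustration and are not the actual Context Toolkit API.

```java
// Hypothetical sketch of the "context widget" idea: an application subscribes
// to context updates without knowing how the context was sensed.
import java.util.ArrayList;
import java.util.List;

interface ContextListener {
    void onContextUpdate(String key, String value);
}

// The widget hides the sensing details (GPS, Wi-Fi, cell towers, ...) and
// exposes only the resulting context value to its subscribers.
class LocationWidget {
    private final List<ContextListener> listeners = new ArrayList<>();

    public void subscribe(ContextListener listener) {
        listeners.add(listener);
    }

    // In a real system this would be driven by sensor callbacks;
    // here a single sensed value is simulated.
    public void senseAndPublish() {
        String sensedPlace = "office"; // placeholder for real sensing
        for (ContextListener listener : listeners) {
            listener.onContextUpdate("location", sensedPlace);
        }
    }
}

public class ContextWidgetDemo {
    public static void main(String[] args) {
        LocationWidget widget = new LocationWidget();
        widget.subscribe((key, value) ->
                System.out.println("Context changed: " + key + " = " + value));
        widget.senseAndPublish();
    }
}
```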

In addition to the commercial interest in sensors and context-awareness, a great deal of work is being done in various academic settings.

One of the better-documented examples of work with accelerometers is an ongoing project at Fordham University, WISDM (Wireless Sensor Data Mining). In its initial stage, the project implemented three broad steps:

1. Collection. They collected accelerometer readings every 50 ms (20 samples per second) from a group of 29 users.

2. Classification. They created examples by dividing the readings into 10-second segments and then generated 43 features based on the readings. Each reading contains information on three axes of motion: forward movement, up-and-down movement, and horizontal movement. (A simplified sketch of this windowing and feature extraction appears after the list.)

3. Analysis. They analyzed six activities: walking, jogging, ascending stairs, descending stairs, sitting and standing.
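
To make these steps concrete, the sketch below runs the same arithmetic on synthetic data: readings arrive at 20 samples per second, so a 10-second segment holds 200 readings, and a couple of simple per-axis features (mean and standard deviation) are computed for the segment. The feature set shown is a small illustrative subset, not the 43 features the WISDM team actually generated.

```java
// Illustrative sketch of the windowing and feature-extraction steps on
// synthetic accelerometer data: 20 Hz sampling, 10-second segments,
// and simple per-axis summary features.
import java.util.Random;

public class AccelFeatureSketch {
    static final int SAMPLES_PER_SECOND = 20;  // one reading every 50 ms
    static final int SEGMENT_SECONDS = 10;
    static final int SEGMENT_SIZE = SAMPLES_PER_SECOND * SEGMENT_SECONDS; // 200 readings

    public static void main(String[] args) {
        Random rng = new Random(42);
        // Synthetic stand-in for raw readings: [sample][axis], axes = x, y, z.
        double[][] segment = new double[SEGMENT_SIZE][3];
        for (int i = 0; i < SEGMENT_SIZE; i++) {
            for (int axis = 0; axis < 3; axis++) {
                segment[i][axis] = rng.nextGaussian();
            }
        }

        // Two simple features per axis: mean and standard deviation.
        for (int axis = 0; axis < 3; axis++) {
            double mean = 0;
            for (double[] reading : segment) {
                mean += reading[axis];
            }
            mean /= SEGMENT_SIZE;

            double variance = 0;
            for (double[] reading : segment) {
                double d = reading[axis] - mean;
                variance += d * d;
            }
            double stdDev = Math.sqrt(variance / SEGMENT_SIZE);

            System.out.printf("axis %d: mean=%.3f stdDev=%.3f%n", axis, mean, stdDev);
        }
    }
}
```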

While this is a highly focused project, it is interesting for at least two reasons. First, the team initially relied solely on smartphones for data gathering, whereas other experimental work has used sensors attached to subjects’ bodies; here the device, in normal use, is mostly in the subject’s pocket. Second, the project was framed as an application of data mining, which contemplates scaling to massive amounts of data over time. (Note that the project is also focusing on uses of GPS sensor data from the devices and plans to extend its work to other mobile devices and to develop input from “audio sensors (microphones), image sensors (cameras), light sensors, proximity sensors, temperature sensors, pressure sensors, direction sensors (compasses) and various other sensors that reside on these devices.”)

The study reported a high degree of accuracy in recognizing jogging and walking (the two most common activities among participants), as well as sitting and standing. The stair-ascending and stair-descending activities were not recognized as reliably, in part because it was difficult to tell them apart.

It should be noted that there are obvious limitations to measuring activities with accelerometers: they do not capture activities of the hands or feet, and the phone may sit in someone’s purse or on a table for extended periods of time.

The project team has moved ahead with other aspects of experimentation. They are considering how to move some of the software functionality onto the phone itself, allowing, for example, user personalization of the data or running the classifier on the phone, with the user training it. The group expects to demonstrate an activity tracker that includes code resident on a smart device, a server, and a web user interface.
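
As a rough sketch of what on-device classification might look like, the code below applies a simple nearest-centroid rule: each activity is represented by a stored feature centroid (which a user-trained model could supply), and an incoming segment is labeled with the closest one. The centroid and feature values are invented for illustration and do not reflect the classifiers the WISDM team actually used.

```java
// Minimal sketch of on-device activity recognition via a nearest-centroid
// rule over per-segment feature vectors. All numbers are illustrative.
import java.util.Map;

public class NearestCentroidSketch {
    // Euclidean distance between two feature vectors of equal length.
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // Toy two-feature centroids (e.g., mean and standard deviation of one axis),
        // standing in for what a user-trained model would store.
        Map<String, double[]> centroids = Map.of(
                "walking", new double[]{0.8, 2.0},
                "jogging", new double[]{1.5, 5.0},
                "sitting", new double[]{0.1, 0.2});

        double[] segmentFeatures = {1.4, 4.6}; // features from a new 10-second segment

        String best = null;
        double bestDistance = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> entry : centroids.entrySet()) {
            double d = distance(segmentFeatures, entry.getValue());
            if (d < bestDistance) {
                bestDistance = d;
                best = entry.getKey();
            }
        }
        System.out.println("Predicted activity: " + best);
    }
}
```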

The team envisions apps primarily related to health, such as keeping a personal exercise profile, as well as personal apps that customize the phone or another device based on what the individual is doing at the moment (e.g., sending calls directly to voicemail if a user is jogging). The data could also be valuable for recognizing a person’s gait, including characteristics such as a limp, for monitoring movement, or for recognizing a fall. By June 2013, the team had concluded what it described as its third stage of gait research.
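
As a toy illustration of the kind of activity-based personalization described above, the hypothetical snippet below maps the currently recognized activity to a call-handling decision, sending calls to voicemail while the user is jogging. The enum and routing rule are invented for illustration.

```java
// Toy sketch of activity-based personalization: route an incoming call
// based on the most recently recognized activity.
public class CallRoutingSketch {
    enum Activity { WALKING, JOGGING, ASCENDING_STAIRS, DESCENDING_STAIRS, SITTING, STANDING }

    // Hypothetical rule: let calls ring unless the user is jogging.
    static String handleIncomingCall(Activity current) {
        return (current == Activity.JOGGING) ? "send to voicemail" : "ring normally";
    }

    public static void main(String[] args) {
        System.out.println(handleIncomingCall(Activity.JOGGING)); // send to voicemail
        System.out.println(handleIncomingCall(Activity.SITTING)); // ring normally
    }
}
```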