First let’s look at some examples, and then consider how they might be combined.
NeuroSky have an EEG headset called MindWave that reads brainwaves – a mobile version that connects to PCs and Mac computers, as well as iOS and Android devices, sells for only $99.99.
A gesture sensor from Thalmic Labs, whose assistive potential I wrote about in an earlier post, has also been used to fly a model helicopter; see Jedi assistive health technologies.
On the fun side, there are also some unexpected applications of NeuroSky technology, such as a set of ‘brainwave cat ears’ that respond to your emotional state, swivelling to indicate what you are paying attention to – donuts, members of the other sex, etc. Take a look at this video about Necomimi:
A recent application was demonstrated by a team led by Prof. John Chuang at Berkeley: a “pass-thought” – a particular pattern of thoughts that can be used in place of a typed password for computer access – see this article for more.
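To make the pass-thought idea concrete, here is a minimal sketch of how such an authenticator might work: compare a fresh EEG feature vector against a template captured at enrolment. The feature values, similarity measure, and threshold are all hypothetical illustrations, not Chuang’s actual method.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def authenticate(sample, template, threshold=0.95):
    """Accept the attempt if the fresh EEG feature vector is close
    enough to the enrolled pass-thought template (hypothetical scheme)."""
    return cosine_similarity(sample, template) >= threshold

# Hypothetical band-power features (e.g. relative alpha/beta/theta power):
enrolled = [0.62, 0.21, 0.17]   # captured during enrolment
attempt = [0.60, 0.23, 0.17]    # captured at login
granted = authenticate(attempt, enrolled)
```

The real research problem, of course, is extracting features that are stable for the same person thinking the same thought, yet distinct across people.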
Another way to assess mood or emotional state is to measure skin conductance. The Affectiva Q Sensor “is a wearable, wireless biosensor that measures emotional arousal via skin conductance, a form of electrodermal activity that grows higher during states such as excitement, attention or anxiety and lower during states such as boredom or relaxation. The sensor also measures temperature and activity.”
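As a rough illustration of that mapping, here is a toy sketch that buckets a skin-conductance reading into an arousal state relative to a per-person baseline. The baseline and threshold values (in microsiemens) are invented for illustration and are not the Q Sensor’s actual algorithm.

```python
def arousal_state(scl_microsiemens, baseline=2.0, delta=0.5):
    """Toy classifier: bucket a skin conductance level (in µS) into an
    arousal state relative to a per-person baseline. Real electrodermal
    analysis separates the slow tonic level from fast phasic responses."""
    if scl_microsiemens > baseline + delta:
        return "aroused"   # excitement, attention, anxiety
    if scl_microsiemens < baseline - delta:
        return "calm"      # boredom, relaxation
    return "neutral"
```

In practice the baseline itself drifts with temperature and activity, which is presumably why the sensor measures those too.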
Migraine monitoring is another potential health application. See this article for more.
Affectiva also have some interesting technology to determine mood, using video monitoring of a person’s face. An early application is to analyze responses to advertisements. You can try it yourself by having your response to Super Bowl commercials analyzed here: http://www.affectiva.com/affdex/?#pane_tryit
In an earlier post, I discussed the health applications of Google Glass, a wearable headset that records what you see and presents data in your visual field. I can’t help but wonder whether a second camera that concurrently records the wearer’s mood using Affectiva’s technology could provide input to Glass alongside verbal commands and location context – much like the two cameras on a smartphone, one facing the environment and one facing the user.
Speaking of Glass, it wouldn’t be hard to add a NeuroSky EEG sensor to the headset as another input device. I’m not sure about the ears though…
Gestures could be sensed through a Thalmic Labs wristband, perhaps combined with an Affectiva skin-conductance sensor; then add an EEG sensor… You get the idea.
Humans, and indeed all animals, depend on the integration of continuous data from many senses. We should expect no less from developing bio-sensor extensions. We will have a network of integrated sensors that extend our ‘natural’ senses and integrate with our nervous responses.
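A minimal sketch of that kind of integration: combine several normalised sensor readings into a single estimate with a weighted average. The channel names and weights here are invented purely for illustration – real multimodal sensor fusion (Kalman filtering and the like) is far more sophisticated.

```python
def fuse(readings, weights):
    """Weighted average of normalised (0..1) sensor readings - a toy
    stand-in for real multimodal sensor fusion."""
    total = sum(weights.values())
    return sum(readings[name] * w for name, w in weights.items()) / total

# Invented channel names and weights, purely for illustration:
readings = {"eeg_attention": 0.8, "skin_conductance": 0.6, "gesture_activity": 0.7}
weights = {"eeg_attention": 0.5, "skin_conductance": 0.3, "gesture_activity": 0.2}
engagement = fuse(readings, weights)  # a single 0..1 engagement estimate
```

Weighting matters because the channels differ in reliability: EEG is noisy but fast, skin conductance is slow but robust, and a fused estimate can be steadier than any single sensor.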