
Not One Big Brother But Brothers All Around

February 5, 2014 • Innovation, News

Joaquin Phoenix’s girlfriend in “Her” wouldn’t have been a program worth a fig if she didn’t have some ability to gauge her user’s emotions – so why does it seem so alarming when Apple seeks to patent a system to assess a user’s mood?  The framing of the patent, as a means of “improving receptiveness to targeted content delivery”, pretty clearly means serving up ads with a better chance of catching some interest; the ideal artificial intelligence agent (annoyingly called an operating system in the movie), on the other hand, doesn’t seem to want anything but Theodore Twombly’s happiness.

When we talk about an interface being “intuitive” we mean that it makes sense to a human without any special training or knowledge.  That is, controlling the system in context just makes sense to a typical user.  An intuitive system is one where the action needed to control it is obvious without training; or, on the flip side, one where the system anticipates the user’s next action.  The more intuitive the system, the less important the distinction between the two perspectives becomes.

User interfaces will continue to improve, and that means they’ll keep becoming more intuitive.  They will “learn” (by enhancement or with an integral learning capacity) to anticipate us more often and more accurately.  And since it’s human intuition we’re simulating, it’s inevitable that at some point an emotional component will be introduced.  Before too long, we will be surrounded by systems with an emotion-reading capability.  A system that does the same thing whether we’re depressed over divorce proceedings or dizzy with new love is probably just fine if it’s producing a global customer clothing-size model or controlling the furnace; but when it’s being used for car shopping, job hunting or financial planning, a range of reactions based on mood might be very helpful.

The expectation that mood-sensitive AI would benefit the application’s user presumes a benevolent “Her”-like agent; the Apple patent makes no such assumption.  But the underlying techniques apply equally to either intent.  Gathering mood-associated data and comparing it to a baseline could give any system a basis for judging the most appropriate or effective action.  As with all technology, there will be ways to create real user value and there will be the potential for exploitation and misuse.  As with applications that make use of personal private information, there’s much to be gained and much to guard against.  I’m not completely sanguine about the prospect, but we will adjust to mood-reading systems.  As consumers, we’ll find a balance between using and being invaded.  As managers and innovators of these systems, we’ll have to temper application with ethical judgment.
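
To make the baseline idea concrete, here is a toy sketch in Python – not anything drawn from the patent – where the signal names, the z-score comparison and the thresholds are all invented for illustration:

```python
# Toy illustration (not Apple's method): compare current mood-associated
# signals against a per-user baseline and pick a coarse mood label.
# Signal names, thresholds, and the z-score rule are invented for this sketch.

from statistics import mean, stdev

def build_baseline(history):
    """history: list of readings like {"typing_speed": 310, "heart_rate": 64}."""
    baseline = {}
    for key in history[0]:
        samples = [reading[key] for reading in history]
        baseline[key] = (mean(samples), stdev(samples))
    return baseline

def infer_mood(current, baseline, threshold=1.5):
    """Label the current reading by how far it sits from the user's baseline."""
    zscores = [
        (current[key] - mu) / sigma if sigma else 0.0
        for key, (mu, sigma) in baseline.items()
    ]
    avg = mean(zscores)
    if avg > threshold:
        return "elevated"
    if avg < -threshold:
        return "depressed"
    return "baseline"

# A week of readings establishes the baseline; today's reading is judged against it.
history = [{"typing_speed": 300 + i, "heart_rate": 62 + i % 3} for i in range(7)]
print(infer_mood({"typing_speed": 340, "heart_rate": 80}, build_baseline(history)))
```

Even a toy like this makes the point: nothing in the comparison step cares whether the resulting mood label is used to comfort the user or to time an ad.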

Postscript: If machines are to become sensitive to our emotional state, does that mean we need to be considerate of their feelings?  It does seem to follow.  The Social Qs column of the New York Times raised this question last weekend.  Whatever Siri may feel about her treatment at our hands, the way people treat a machine could at the very least be revealing of character.
