Maestro Electro

Artificial Intelligence: I Have My Robots Worry for Me (Part 1)

January 25, 2017 • Features

When I was a communications engineer in the Canadian air force, I occasionally amused myself by telling senior pilots that I felt their jobs could be replaced by a computer.  “Ho, ho, ho, young captain, it’s really not that simple” was generally the tone of the response.  Yet here we are.  Robotic drones are flying in swarms for the USAF, China is flight testing an autonomous passenger airplane, and consumers can even buy a drone to follow them around with a GoPro.

Now that I’m a civilian, I’ve had similar responses to my prognostications in different fields.  A few years ago I tried to get the CEO of an insurance carrier interested in artificial intelligence for rating insurance risks, and got the same kind of dismissal.  I gather underwriting is too complicated to replace with a computer.  One hears it all the time: “This job is as much art as science.”

Before we unpick that, let's make sure we're clear what we're talking about.  Artificial Intelligence is the topic of the moment.  Every new year's listicle of technology to watch in 2017 mentions AI, from Mashable to Gartner (where it ranks among their top three).  So just what is it?  AI is one of those slippery terms that we all think we understand, but when we start digging into it we may find that we're all talking about different things.  Part of the problem is that we don't actually agree on what intelligence is, so the question of what we're trying to simulate is not well defined.  "Artificial intelligence" is often just media shorthand for any sophisticated and powerful algorithm.  For the purposes of this article – and for what people generally mean when the term is bandied about – any system that includes one of the following components is unequivocally AI:

  • expert systems, which gather facts and rules to simulate reasoning in a specific area;
  • machine learning, which infers the facts and rules from large data sets;
  • neural networks, which seek to find meaningful patterns in unstructured (or informally structured) data; and
  • genetic algorithms, which refine a procedure based on how close a previous iteration has come to an ideal outcome.
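To make the first of these concrete, here is a toy forward-chaining expert system: it repeatedly applies if-then rules to a set of known facts until no new conclusions emerge.  The car-diagnosis facts and rules are invented purely for illustration – a real system would gather thousands of them from domain experts.

```python
# Toy expert system: facts plus if-then rules, applied by forward chaining.
# The facts and rules below are invented for illustration only.

facts = {"engine_wont_start", "lights_dim"}
rules = [
    ({"engine_wont_start", "lights_dim"}, "battery_flat"),
    ({"battery_flat"}, "recommend_jump_start"),
]

def infer(facts, rules):
    """Apply every rule whose conditions are met until nothing new is learned."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if all its conditions are known and the
            # conclusion is new.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer(facts, rules))
```

Note that the "reasoning" here is nothing more than set membership tests applied in a loop – which is exactly why such systems sit at the bottom of the hierarchy discussed next.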

These are the tools and techniques.  To understand where a system fits on the continuum between a speech-to-text tool and HAL 9000, it's useful to consider the sophistication of the internal model that the system needs to manage.  Arend Hintze, an MSU professor writing for The Conversation, offers this hierarchical classification:

  • reactive machines simply need to view a snapshot of the world and act accordingly based on their rules;
  • limited memory systems add a time dimension, so they can act on trends developing in the input;
  • AIs with a theory of mind can make judgments about the objects in their model, ascribing thoughts and motivations that affect their behavior; and
  • self-aware machines extend that understanding of objects to themselves.
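The gap between the first two levels is easy to show with a contrived example.  The thermostat scenario and thresholds below are entirely invented; the point is only that the reactive policy sees a single snapshot, while the limited-memory policy also sees how the input is changing.

```python
# Toy contrast between the first two levels of Hintze's hierarchy.
# The thermostat scenario and the 20-degree threshold are invented.

def reactive_policy(temp):
    # Reactive: decides from a single snapshot of the world.
    return "heat" if temp < 20.0 else "off"

def limited_memory_policy(recent_temps):
    # Limited memory: also considers how the input is developing,
    # switching off early when the room is already warming on its own.
    current = recent_temps[-1]
    trend = recent_temps[-1] - recent_temps[0]
    if current < 20.0 and trend <= 0:
        return "heat"
    return "off"

print(reactive_policy(18.0))                # → heat
print(limited_memory_policy([17.0, 18.0]))  # → off (cold, but warming)
```

The same snapshot (18 degrees) produces different actions once history is available – the essence of the "time dimension" in the second bullet.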

From this standpoint, Siri, Cortana, Alexa and their friends, and even the most complex business applications you’ve heard of, are the least sophisticated kind of AI.  Expert systems use deep learning and neural networks to build their rules but are still only reactive.  Driverless cars make it to the second level since they need to know where an object came from to judge where it’s going; but it’s unproven whether even the most advanced AI encompasses what a cognitive psychologist would deem theory of mind.  We have a long way to go to reach Skynet.

Notwithstanding that AI systems are still flunking the Turing test, I think the underwriters, like the pilots, are wrong.  When people call their work “art” it’s usually because they don’t understand why they make the choices they do. If your job is truly an art, Picasso, then I agree you’re not likely to be automated; but if calling your professional skills an art is just easier than explaining them, your position may be vulnerable.

Certainly there's no reason you need to understand the reasons for everything you do, so long as you're successful.  You may have those Malcolm Gladwell "Blink" moments where you come to a decision by processing a lot of data intuitively.  As it turns out, however, computers are also good at processing a lot of data.  If you're acting on feelings and hunches, even ones founded in training and long experience, we now have the tools to absorb the rules you're applying – consciously or unconsciously – and to program a system that uses them to reach your expert conclusions.  Even better, the process weeds out the bad rules – the old wives' tales we've absorbed and the biases to which weak human minds are prone – and creates a system that applies the best rules consistently.  No fear of signing the deal on Friday the 13th, no special treatment for old friends or pretty new ones.  And AIs don't require performance reviews, compliance training or annual raises, and don't get tired, catch the flu, grow distracted, or come to work hung over.
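As a toy sketch of what "absorbing the rules" can mean, the snippet below recovers a single cutoff from a handful of past decisions.  The data, the lone age feature, and the threshold-style rule are all hypothetical – real underwriting models learn thousands of interacting rules the same basic way, by searching for whatever best reproduces the expert's judgments.

```python
# Toy rule extraction: infer an underwriting cutoff from past decisions.
# The history data and the single age feature are invented for illustration.

# (applicant_age, was_approved) pairs produced by a hypothetical expert.
history = [(22, False), (25, False), (31, True), (45, True),
           (52, True), (19, False), (38, True), (28, False)]

def learn_threshold(examples):
    """Find the age cutoff that best reproduces the expert's decisions."""
    best_cut, best_score = None, -1
    for cut in sorted(age for age, _ in examples):
        # Score the candidate rule "approve if age >= cut" against history.
        score = sum((age >= cut) == approved for age, approved in examples)
        if score > best_score:
            best_cut, best_score = cut, score
    return best_cut

print(learn_threshold(history))  # → 31
```

The expert never articulated the rule; the program inferred it – and, applied mechanically, it can't be swayed by a hunch about Friday the 13th.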

But just because you could be replaced by a computer, that doesn’t mean you’ll be out of a job.  I’ll talk about this in part 2.

 
