Thursday, June 24, 2010

AI and its Discontents

       I love the AI community. Still working diligently after all these years, always believing the answer is just around the corner. Sort of like Fusion researchers. I expect they will figure it out... eventually. 
      They’ve been quite successful with various types of expert systems, but since the AI community keeps subdividing itself into smaller and smaller specialties working on smaller and smaller problems, I’m not sure if they’ll ever be able to put all the pieces back together again to create a system with general intelligence. And general intelligence is a requirement for autonomous AI. 
It could happen tomorrow. 
        But I’m not holding my breath. 
In the very early days of AI they thought it was simply a question of the number of components and connections. A neuron is nothing but a simple switch, right? Only two possible states, right? Fire/no fire, on/off, polarized/depolarized. Sooo, if we just connect enough switches together, it should follow that the system will become self-aware.
       Or something like that.
      (There is the minor issue that each neuron/switch may be connected to ten thousand+ other switches, but we'll just ignore that inconvenient little detail for the moment.)
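(If you want to see just how little is in that model, here's a rough sketch of the classic "neuron as a threshold switch" - a McCulloch-Pitts sort of unit. The weights and thresholds are made up purely for illustration; no actual neuroscientist would sign off on this.)

    # A minimal sketch of the "neuron as a binary switch" idea:
    # a McCulloch-Pitts-style threshold unit. Weights and thresholds
    # below are invented for illustration only.

    def neuron(inputs, weights, threshold):
        """Fire (1) if the weighted sum of inputs crosses the threshold, else 0."""
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Wire a few of these "switches" together and you have a tiny network.
    # The early hope: enough of them, densely enough connected, and
    # self-awareness would follow. (Spoiler: it doesn't.)
    layer_one = [neuron([1, 0, 1], [0.5, 0.2, 0.4], 0.8),
                 neuron([1, 0, 1], [0.1, 0.9, 0.3], 0.5)]
    print(neuron(layer_one, [1.0, 1.0], 1.5))    # on/off, and nothing more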
There’s even a short story in the early SF literature about the mechanical-switch phone system of the period, (Yes, my children, telephone switching technology was originally mechanical!), achieving self-awareness once the total number of mechanical switches created enough potential connectivity to exceed some (unspecified) critical threshold.
      Abracadabra, poof! Self-awareness.
     Can’t for the life of me remember the author or title at the moment. Made a bit of an impression. The system announced its awakening by ringing every phone on the planet. That was its birth cry. 
      (As you will have gathered, the story was written before software and programming were common knowledge.)
Didn’t turn out to be that simple, as anyone watching the growth of the web should have noticed. Billions of processors, millions of computers, a frickin universe of software, and the search algorithms still don’t come up to the level of a good reference librarian. 
      Often frustrating as hell.
      Things keep improving, but still, I don’t want 2.648 x 10^6 hits. I mean shit, no one looks through more than the first three pages of results. But here we are with an entire planet’s worth of connections, server farms, mainframes, cloud computing, all talking to each other at the speed of light, and still no evidence of self-awareness. Not even a hint of it. So it can’t merely be the number of processors.
And speaking of processors, how about the chess projects?
When the computers finally won against the Grand Masters, it was an interesting achievement. But it wasn’t, by any stretch of the imagination, anything even remotely resembling AI.
Unfortunately the Media, in search of the elusive sound bite, totally missed that point. 
      The designers freely admitted their computers were not using AI. That it was too hard a nut to crack. And also, strangely enough, chess was chosen because it is one of the easier games for computers to deal with - the game of Go, on the other hand... (Though I do believe they have made progress on that front.)
Success in chess was the result of brute processing power dedicated to a single task. They used a bunch, (that’s the technical term), of processor chips running in parallel to crunch a lot, (another technical term), of numbers at very high speed, letting the system search an enormous number of legal continuations of the Grand Master’s move, many moves deep. And also, (and I love this, I really do), the folks who designed the hardware and software were allowed to tweak the program between games, while the match was still underway. So, I have to ask, was it really ChipTest and his descendants? Or the programmers? As far as I can see, they only proved that a machine can win at chess if it can run through enough of the possible permutations in a reasonable amount of time. Which is a good thing, after all - it’s no good if we die of old age before the damn machine makes the next move.
  But it cast ZERO light on how a human Grand Master wins. 
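For the curious, the heart of that brute-force approach is just game-tree search - something like the toy sketch below. To keep it short I’ve used a trivial take-away game instead of chess, and none of this is Deep Blue’s actual code (the real thing added custom chips, a hand-tuned evaluation function, and a great deal of pruning); it’s only the basic idea.

    # A toy sketch of brute-force game-tree search (plain minimax).
    # The "game": a pile of stones, each move removes 1 or 2, and whoever
    # takes the last stone wins. Chess is the same idea, just with a vastly
    # bigger tree and an evaluation function where the search has to stop.

    def legal_moves(pile):
        return [m for m in (1, 2) if m <= pile]

    def minimax(pile, maximizing):
        if pile == 0:
            # No stones left: the player to move has already lost.
            return -1 if maximizing else 1
        scores = [minimax(pile - m, not maximizing) for m in legal_moves(pile)]
        return max(scores) if maximizing else min(scores)

    print(minimax(7, True))   # 1 means the player to move can force a win

That’s it. No insight, no understanding of the game - just visiting positions fast enough to get an answer before anyone dies of old age.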


      Tonight on Thursday Night at the Fights!
  We have - In the Blue Corner!
This year, adding 100 more processors running in parallel, with faster clocking, (42 more teraflops!), running our new and improved software, using hundreds of watts of electricity, producing thousands of BTUs of waste heat, and weighing in at over 100 pounds of metal, silicon, plastic, and this, that, and the other.
"The Current Hardware!"
 
      And, The Challenger, In the Red Corner!!!  We have!!!
The same brain as last year!!! (Also known as "Some Russian Guy"!)
Weighing in at about three pounds! That wrinkly grey blob of organic matter that runs on glucose and oxygen, consuming almost enough power to light a twenty-watt light bulb. Yes! That three-pound blob of organic matter that can “calculate” approximately three positions per second. That three-pound blob of jello, dedicating a minor part of itself to solving the problems of the game at hand.
      Yes! It's “The Same Old Wetware!” 
 
The computer is a specialist: it plays chess. That’s it. That is all it can do.
The human Chess Master not only plays chess; the Chess Master can also drive a car, talk, walk, learn, think about what he wants for dinner, and look at the sunset and appreciate its beauty. (Well, that last one is a stretch, admittedly.) The total physical area of his or her brain actually involved in playing the game might equal the surface area of one of the chips in Deep Blue. (Or whatever the current generation is called.)
     It's just not fair.
But, if we do succeed in building AI with general intelligence, (and I'm fairly certain we eventually will), there’s still this minor problem - I think of it as the fly, (or bug, if you will), in the ointment. Potentially a really big, really ugly fly - one that anyone who gives the matter a moment of serious thought should see. If the system is to be totally autonomous, it must want to do something. It has to have some sort of drive. It must have desire. (It doesn't need to know why it wants to do something - that will come later - but it does have to want to do something.)
There’s a syndrome called akinetic mutism. It occurs in cases of stroke or other damage to the frontal lobes, or damage to the cingulate gyrus. Though it is also called Coma Vigil, it is not what most people would consider a coma. The people affected have fairly normal sleep / wake cycles, and during waking they are conscious, they will follow people moving about the room with their eyes. Most people who present the syndrome die. But in the few instances where folks have returned to the land of the active, and been interviewed, they report that they were aware of what was going on around them, but they had no desire to do anything. The systems were in place, data was being processed, awareness existed, but they did nothing. They understood when they were requested to perform actions, but still they did nothing. They say they did not experience a desire to respond. They simply didn’t want to do anything.
So, it would seem, the design of an autonomous AI system must include, oddly enough, a desire module - and also, if it is to survive in the real world outside the controlled conditions of the lab long enough to achieve its goal, an instinct for self-preservation.
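Just to make "desire module" concrete, here’s a back-of-the-envelope sketch of what I mean: a toy agent loop where nothing happens unless some drive crosses a threshold. Every name in it (Drive, Agent, the urgency numbers) is invented for illustration - nobody actually builds them this way, as far as I know.

    # A toy sketch of an agent loop with an explicit "desire module."
    # All names and numbers here are invented for illustration.

    import random

    class Drive:
        def __init__(self, name, urgency=0.0):
            self.name = name
            self.urgency = urgency    # how badly the agent "wants" this right now

    class Agent:
        def __init__(self, drives):
            self.drives = drives

        def step(self, threshold=0.5):
            # If no drive is urgent enough, the agent does... nothing.
            # Sensing works, processing works, and yet: akinetic mutism.
            strongest = max(self.drives, key=lambda d: d.urgency)
            if strongest.urgency < threshold:
                return "idle"
            return "pursue: " + strongest.name

    agent = Agent([Drive("self_preservation", urgency=random.random()),
                   Drive("finish_the_task", urgency=random.random())])
    print(agent.step())    # either "idle" or "pursue: <something>"

The loop is trivial. The hard part - and the dangerous part - is where those urgency numbers come from.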
Here’s the big ugly fly.
The moment someone, (and if we succeed in AI it will be a someone), wants or desires something - has a goal, or an objective - in a world of other someones who have competing, conflicting, or mutually exclusive goals or desires, there will be conflict. It will be unavoidable, and always potentially lethal. (How bad do you want it? What would you do for it? How far would you go?)
     Asimov invented the "Laws of Robotics" as a sort of end run around this problem, hard-wired in, as it were. And they made for good stories.


      But -

The behavior of any system that complex, with that many lines of code*, that amount of processing power, and that degree of necessarily flexible architecture, (capable of reprogramming and rewiring itself on the fly, in real time), will be inherently unpredictable. 
     We can't currently predict the behavior of unintelligent software being run on unintelligent hardware. (If we could, we could design systems that would never crash or lock up.) And, if certain hypotheses are true, we will never be able to. The best we will ever be able to achieve will be probabilistic predictions, aka "best guesses". It’s N^n beyond the Halting Problem.
If you design a system to learn, as soon as you turn it on it will begin to change, and you will no longer be able to predict what it will do with any certainty. (Any parent knows this.)
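A trivial illustration of the point - a toy "learner" whose single weight updates as data arrives. Two copies of the exact same code, fed the same two examples in a different order, end up giving different answers. The numbers are invented, obviously; it’s just a sketch.

    # Toy illustration: a "learning" system changes the moment you turn it on.
    # Its future answers depend on the exact stream of experience it happens
    # to get, so you can't predict its behavior from the source code alone.

    def make_learner(learning_rate=0.1):
        weight = 0.0
        def learner(x, feedback=None):
            nonlocal weight
            prediction = weight * x
            if feedback is not None:             # learn from every correction
                weight += learning_rate * (feedback - prediction) * x
            return prediction
        return learner

    a = make_learner()
    b = make_learner()
    a(2, feedback=4); a(3, feedback=9)           # one history of experience
    b(3, feedback=9); b(2, feedback=4)           # same data, different order
    print(a(5), b(5))                            # same code, different answers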
And here’s another thing. Just like any of us, the AI program can only know the world through its senses. Whatever senses we may choose to give it, or it may eventually choose for itself.
See where I’m going with this?
And so, when we have something at least as complex as a human brain, with the brain’s N^n potential states, not all of those states will be "healthy states". This is inevitable, given the law of unintended consequences. (And it is a hard and fast law.) So what do we do when, (not if, but when), our AI program becomes psychotic and delusional? Especially if we've given it control over something very important?
  
    It will happen. I guarantee it.


* I am not a believer in the idea that AI can be found in software. Or that software can ever achieve intelligence. I'm fairly certain that brains run on minimal software, and that intelligence is the result of a flexible neural architecture with continuous remodeling and re-weighting capability, plus genetically coded, (for lack of a better word), firmware.