Sept. 25, 2009 -- Will computers soon think like us? Will computers soon think for us?
My hunch is that the latter will arrive long before the former.
What got me thinking about this was the comment this week, covered throughout the mainstream media, by Intel Chief Technology Officer Justin Rattner: "There will be a surprising amount of machines that do exhibit human-like capabilities. Not to the extent of what humans can do today, but in an increasing number of areas these machines will show more and more human-like intelligence, particularly in the perceptual tasks. So yeah, at some point, assuming all kinds of advances and breakthroughs, it's not inconceivable we'll reach a point that machines do match human intelligence."
Read that a couple of times and you'll realize that Rattner has hedged and covered his bets about six different ways -- but that didn't keep publications from running headlines like Network World's: "Machines could ultimately match human intelligence, says Intel CTO."
Well, yes, ultimately…
Ray Kurzweil: Singularity Will Arrive in About 20 Years
But how far away is that moment, that "singularity", when computers easily pass the Turing Test – i.e., when communicating with them is indistinguishable from speaking to a human being?
The most famous prognosticator on the subject, scientist and writer Ray Kurzweil, has predicted the singularity will arrive in about twenty years or so.
At that point, he says, we will be able to map all of the charges in all of the neurons of our brains, and then port them over to computers … and thus give ourselves not only enhanced cognitive powers, but also a kind of immortality.
Even if this scenario seems a bit ghastly to you (as it does to me), the logic behind it seems pretty sound. After all, we've now been under the regime of Moore's Law for more than forty years …and like a Timex watch it just keeps on ticking away, doubling the power of everything digital every couple years.
And since Moore's Law is exponential, that power curve is also getting more and more vertical – which means that each one of those performance jumps is now huge and getting even bigger.
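The arithmetic behind that claim is easy to check. Here is a back-of-the-envelope sketch, assuming the commonly quoted one-doubling-every-two-years pace (the exact period is debated; the exponential shape is the point):

```python
# Back-of-the-envelope Moore's Law arithmetic: if digital performance
# doubles every two years, growth over N years is 2 ** (N / 2).
def moores_law_factor(years, doubling_period=2):
    """Multiplicative performance growth after `years`, assuming one
    doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Forty years of doubling -- roughly the span from Moore's 1965 paper
# to this column -- yields about a million-fold increase:
print(round(moores_law_factor(40)))  # -> 1048576

# And each successive jump is larger in absolute terms than all the
# growth that came before it -- doubling from a million adds vastly
# more raw capacity than doubling from two.
```

That last comment is why the curve "feels" ever more vertical: each doubling now dwarfs every previous one combined.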
Already, as the Network World article itself noted, computers are exhibiting characteristics far beyond anything in human imagination. The first 'petaflop' – i.e., a quadrillion operations per second – supercomputers were delivered earlier this year, and now designers are working on 'exaflop' – that's a quintillion, or 1,000,000,000,000,000,000 operations per second – computers.
Those are 'sands on all of the world's beaches' kinds of numbers; or, more impressively, every heartbeat of every human being that has ever lived on Earth.
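To put 'quintillion' in perspective, here is a quick sanity check that assumes nothing beyond the definitions of peta- and exa-:

```python
# The scale of 'peta' and 'exa': a petaflop machine performs 10**15
# operations per second; an exaflop machine performs 10**18.
petaflop = 10 ** 15
exaflop = 10 ** 18

# Counting to a quintillion at one number per second would take a
# human roughly 31.7 billion years -- more than twice the age of the
# universe. An exaflop machine gets there in one second.
seconds_per_year = 3600 * 24 * 365.25
years_to_count = exaflop / seconds_per_year
print(f"{years_to_count:.3g}")  # -> 3.17e+10
```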
Why Wouldn't Computers Start Thinking?
So, when you consider numbers like that …yeah, why wouldn't these computers start actually thinking at some point?
And, given that most experts now predict that Moore's Law could keep going for another 20 years, it seems a pretty safe bet that someday out there we'll cross an invisible threshold, one of our biggest computers will suddenly start whispering, "Cogito ergo sum," and our world will change forever.
In light of all that, Rattner's comments, far from being radical, actually seem pretty conservative. It almost seems as if the safer bet is to put your money on the advent of thinking machines.
Sure, there are some technical problems in the way. For example, neurons in the human brain are linked every which way to their fellow neurons, while silicon transistors are much more linear. But that's a hardware/software problem that seems pretty solvable.
And, of course, there's always the nagging concern that somewhere out there Moore's Law is simply going to crash into a heretofore hidden law of physics, an insurmountable technical barrier, and will be stopped in its tracks. But, I've been around Silicon Valley as long as Moore's Law has been in existence – and I've seen one after another of those physical roadblocks predicted, reached and punched through.
Every day, though you don't read it in the general press, scientists at Intel, HP, IBM or some university come up with a new way to make an electronic switch – organic, quantum, built from just a couple of atoms, etc. – which suggests we are already working on solutions to roadblocks we haven't even hit yet.
My gut tells me that, somehow, human ingenuity will make sure that Moore's Law will outlive most of the people reading this column – unless, of course, in the meantime we do reach the Singularity, port our brains onto computers, and become immortal.
So, should we then assume that we are on the brink of the age of truly thinking, even conscious, machines? Well, not so fast.
Even as all of these technological advances are taking place, I can't help sensing that something else is going on out there in the world of science and tech as well. It is a growing feeling that perhaps a number of our smug certainties are not panning out the way they were supposed to.
Take exobiology. Every clever school kid over the last thirty years has heard about the Drake Equation (devised by scientist Frank Drake around the same time as Gordon Moore proposed his Law).
This equation suggests that if you take the number of stars in the Milky Way and then start dividing it down by various likelihoods – whether a star has planets, whether those planets are the right size and distance from their sun, whether they have the right chemistry, etc. – you will eventually come up with an estimate … a very big number, it seems … of how many planets in our galaxy host intelligent life.
So convincing is this equation that it has sparked a massive search (SETI being the most famous example) for our intelligent counterparts out there ever since. And yet …nothing. Of course, you can make a lot of convincing arguments about why we haven't found anyone out there. And yet …nothing.
Needless to say, that could all change tomorrow if one of our big radio telescopes were to pick up, say, the Alpha Centauri equivalent of the "Jack Benny Show." But for now you can't help but sense a growing unease among researchers that just maybe the Drake Equation is wrong, that there is some missing X factor we haven't considered that throws the whole model out the window.
Lately, despite all of the predictions about the Singularity and comments like Rattner's, I'm getting a similar vibe from the computing world – a frustration that, despite the amazing power of the latest generation of processors and computers, they are no more awake and aware than an HP-35 calculator of 1972.
The Internet itself is, after all, the biggest computational engine ever devised, and yet it is still as dead as a doornail. It seems pretty obvious that it is not going to wake up anytime soon in some kind of Colossus: Forbin Project nightmare of a sentient computer taking over the world.
As a number of observers have noted, today's computers, a dozen generations advanced from the first computational machines and millions of times more powerful, are no more intelligent than their predecessors; rather, they are just faster, with more sophisticated software.
Why is that? If today's most powerful computers are even half as smart as the human brain, why don't they exhibit the sentience of, say, my cat, or a lizard? Is it because we haven't bolted enough peripheral sensors and systems (vision, touch, locomotion) onto these computers to let them 'inhabit' the natural world? Would that wake them up?
Nothing to date suggests that it will – no matter how far out we go on the curve of Moore's Law. Why?
I have some ideas. For one thing, I'm not convinced our brains are really computational engines; instead, I suspect they are a very sophisticated balancing act between empirical functions (mathematical – i.e., X = 2), language functions (metaphorical – i.e., X = Y is true), and truth-telling functions (metaphysical – i.e., based on everything I have experienced, X does not equal Y).
But even if you dropped a machine with such architecture and a thousand sensors into the natural world, it seems to me there is no evidence that it would 'awaken'.
It might become supremely adaptive to its environment, and capable of rapidly responding to new challenges …but still never know of its own existence. As with life in the universe, with thinking machines we may forever be unable to discover that missing X factor.
On the other hand, and I think this is what Rattner was also suggesting, we already do have several billion thinking 'machines' in the world: human brains.
And as bio-silicon interfaces become more successful, there is every reason to believe that we may use wireless modems, implantable chips and other devices to enhance the processors we already have in our heads.
Moore's Law seems to suggest we can pull this one off – and though we may not find Kurzweil's Singularity and its immortality, we may be able to stuff enough experience into the short time we've got in this world to make it seem like forever.
This is the opinion of the columnist and in no way reflects the opinion of ABC News.
Michael S. Malone is one of the nation's best-known technology writers. He has covered Silicon Valley and high-tech for more than 25 years, beginning with the San Jose Mercury News as the nation's first daily high-tech reporter. His articles and editorials have appeared in such publications as The Wall Street Journal, The Economist and Fortune, and for two years he was a columnist for The New York Times. He was editor of Forbes ASAP, the world's largest-circulation business-tech magazine, at the height of the dot-com boom. Malone is the author or co-author of a dozen books, notably the best-selling "Virtual Corporation." Malone has also hosted three public television interview series, and most recently co-produced the celebrated PBS miniseries on social entrepreneurs, "The New Heroes." He has been the ABCNews.com "Silicon Insider" columnist since 2000.