We are living through an AI renaissance thought wholly unimaginable just a few decades ago: automobiles are becoming increasingly autonomous, machine learning systems can craft prose nearly as well as human poets, and almost every smartphone on the market now comes equipped with an AI assistant. Oxford professor Michael Wooldridge has spent the past quarter century studying the technology. In his new book, A Brief History of Artificial Intelligence, Wooldridge leads readers on an exciting tour of the history of AI, its present capabilities, and where the field is heading.
Excerpted from A Brief History of Artificial Intelligence. Copyright © 2021 by Michael Wooldridge. Excerpted by permission of Flatiron Books, a division of Macmillan Publishers. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Robots and Rationality
In his 1962 book, The Structure of Scientific Revolutions, the philosopher Thomas Kuhn argued that, as scientific understanding advances, there will be times when established scientific orthodoxy can no longer hold up under the strain of manifest failures. At such times of crisis, he argued, a new orthodoxy will emerge and replace the established order: the scientific paradigm will change. By the late 1980s, the boom days of expert systems were over, and another AI crisis was looming. Once again, the AI community was criticized for overselling ideas, promising too much, and delivering too little. This time, the paradigm being questioned was not just the “Knowledge is power” doctrine that had driven the expert systems boom but the basic assumptions that had underpinned AI since the 1950s, symbolic AI in particular. The fiercest critics of AI in the late 1980s, though, were not outsiders but came from within the field itself.
The most eloquent and influential critic of the prevailing AI paradigm was the roboticist Rodney Brooks, who was born in Australia in 1954. Brooks's main interest was in building robots that could carry out useful tasks in the real world. During the early 1980s, he grew increasingly frustrated with the then prevailing idea that the key to building such robots was to encode knowledge about the world in a form that the robot could use as the basis for reasoning and decision-making. He took up a faculty position at MIT in the mid-1980s and began his campaign to rethink AI at its most fundamental level.
THE BROOKSIAN REVOLUTION
To understand Brooks's arguments, it is helpful to return to the Blocks World. Recall that the Blocks World is a simulated domain consisting of a tabletop on which are stacked a number of different objects; the task is to rearrange the objects in certain specified ways. At first sight, the Blocks World seems perfectly reasonable as a proving ground for AI techniques: it sounds like a warehouse environment, and I daresay exactly this point has been made in many grant proposals over the years. But for Brooks, and those who came to adopt his ideas, the Blocks World was bogus for the simple reason that it is simulated, and the simulation glosses over everything that would be difficult about a task like arranging blocks in the real world. A system that can solve problems in the Blocks World, however smart it might appear to be, would be of no value in a warehouse, because the real difficulty in the physical world comes from dealing with issues like perception, which the Blocks World ignores completely. The Blocks World became a symbol of all that was wrong and intellectually bankrupt about the AI orthodoxy of the 1970s and 1980s. (This did not stop research into the Blocks World, however: you can still regularly find research papers using it to this day; I confess to having written some myself.)
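To see just how much the simulation abstracts away, consider a minimal sketch of the domain. This is not code from the book; it is a hypothetical Python toy with invented names, but it captures how little of the real world survives in the Blocks World:

```python
# A hypothetical, minimal Blocks World: the entire "world" is a dictionary
# of stacks, and moving a block always succeeds via simple list operations.
# Perception, gripping, slippage, and occlusion, the hard parts of the real
# task, are absent by construction.

world = {"stack1": ["A", "B"], "stack2": ["C"], "stack3": []}

def move(block, src, dst):
    """Move `block` from the top of stack `src` to the top of stack `dst`."""
    if world[src] and world[src][-1] == block:
        world[dst].append(world[src].pop())
    else:
        raise ValueError(f"{block} is not on top of {src}")

move("B", "stack1", "stack3")  # succeeds instantly and perfectly, every time
print(world)  # {'stack1': ['A'], 'stack2': ['C'], 'stack3': ['B']}
```

The point is what the sketch omits: there is no sensing step and no way for an action to fail, which is precisely the gap Brooks was pointing to.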
Brooks had become convinced that meaningful progress in AI could only be achieved with systems that were situated in the real world: that is, systems embedded directly in some environment, perceiving it and acting upon it. He argued that intelligent behavior can be generated without explicit knowledge and reasoning of the kind promoted by knowledge-based AI in general and logic-based AI in particular, and he suggested instead that intelligence is an emergent property that arises from the interaction of an entity with its environment. The point here is that, when we contemplate human intelligence, we tend to focus on its more glamorous and tangible aspects: reasoning, for example, or problem solving, or playing chess. Reasoning and problem solving might have a role in intelligent behavior, but Brooks and others argued that they were not the right starting point from which to build AI.
Brooks also took issue with the divide-and-conquer assumption that had underpinned AI since its earliest days: the idea that progress in AI research could be made by decomposing intelligent behavior into its constituent components (reasoning, learning, perception), with no attempt to consider how these components worked together.
Finally, he pointed out the naivety of ignoring the issue of computational effort. In particular, he took issue with the idea that all intelligent activities must be reduced to ones such as logical reasoning, which are computationally expensive.
As a student working on AI in the late 1980s, I felt that Brooks was challenging everything I thought I knew about my field. It felt like heresy. In 1991, a young colleague returning from a large AI conference in Australia told me, wide-eyed with excitement, about a shouting match that had developed between Ph.D. students from Stanford (McCarthy's home institution) and MIT (Brooks's). On one side, there was established tradition: logic, knowledge representation, and reasoning. On the other, the outspoken, disrespectful adherents of a new AI movement, not just turning their backs on hallowed tradition but loudly ridiculing it.
While Brooks was probably the highest-profile advocate of the new direction, he was by no means alone. Many other researchers were reaching similar conclusions, and while they did not necessarily agree on the smaller details, there were a number of commonly recurring themes in their different approaches.
The most important was the deposing of knowledge and reasoning from their role at the heart of AI. McCarthy's vision of an AI system that maintained a central symbolic, logical model of its environment, around which all the activities of intelligence orbited, was firmly rejected. Some moderate voices argued that reasoning and representation still had a role to play, although perhaps not a leading one, but more extreme voices rejected them completely.
It is worth exploring this point in a little more detail. Remember that the McCarthy view of logical AI assumes that an AI system will continually follow a particular loop: perceiving its environment, reasoning about what to do, and then acting. But a system that operates in this way is decoupled from its environment.
Take a second to stop reading this book, and look around. You may be in an airport departure lounge, a coffee shop, on a train, in your home, or lying by a river in the sunshine. As you look around, you are not disconnected from your environment and the changes that the environment is undergoing. You are in the moment. Your perception—and your actions—are embedded within and in tune with your environment.
The problem is, the knowledge-based approach doesn't seem to reflect this. Knowledge-based AI assumes that an intelligent system operates through a continual perceive-reason-act loop: processing and interpreting the data it receives from its sensors; using this perceptual information to update its beliefs; reasoning about what to do; performing the action it then selects; and starting its decision loop again. But an AI system built this way is inherently decoupled from its environment. In particular, if the environment changes after it has been observed, the change will make no difference to our knowledge-based system, which will stubbornly continue as though nothing had happened. You and I are not like that. For these reasons, another key theme at the time was that there should be a closely coupled relationship between the situation the system finds itself in and the behavior it exhibits.
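The decoupling is easy to demonstrate. Here is a toy Python sketch, with invented names and nobody's actual architecture: the world changes on every tick, while the deliberative agent observes once and then "reasons" for several ticks before acting, so its action is driven by a stale snapshot rather than by the world as it now is.

```python
import random

# A toy illustration of the decoupling problem; all names are invented for
# this sketch. The environment changes on every tick, whether or not the
# agent happens to be looking.

random.seed(1)

class World:
    def __init__(self):
        self.obstacle = False

    def tick(self):
        self.obstacle = random.random() < 0.5  # the world moves on

def deliberative_action(world):
    snapshot = world.obstacle      # perceive: observe the world once
    for _ in range(10):            # reason: long, expensive deliberation...
        world.tick()               # ...during which the world keeps changing
    return "swerve" if snapshot else "advance"  # act on the stale snapshot

def reactive_action(world):
    world.tick()
    # act directly on the current percept: tightly coupled to the environment
    return "swerve" if world.obstacle else "advance"

w = World()
choice = deliberative_action(w)
print(f"deliberative agent chose {choice!r}, but obstacle is now {w.obstacle}")
```

This is not Brooks's subsumption architecture, which layers many cheap percept-to-action behaviors, but the contrast between acting on stored beliefs and acting on the live percept is the heart of the close-coupling theme.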