Intentional Systems Theory
Intentional systems theory is in the first place an analysis of the meanings of such everyday
‘mentalistic’ terms as ‘believe,’ ‘desire,’ ‘expect,’ ‘decide,’ and ‘intend,’ the terms of ‘folk
psychology’ (Dennett 1971) that we use to interpret, explain, and predict the behavior of
other human beings, animals, some artifacts such as robots and computers, and indeed
ourselves. In traditional parlance, we seem to be attributing minds to the things we thus interpret, and this raises a host of questions about the conditions under which a thing can be truly said to have a mind, or to have beliefs, desires and other ‘mental’ states. According to intentional systems theory, these questions can best be answered by analyzing the logical presuppositions and methods of our attribution practices, when we adopt the intentional stance toward something. Anything that is usefully and voluminously predictable from the intentional stance is, by definition, an intentional system. The intentional stance is the strategy of interpreting the behavior of an entity (person, animal, artifact, whatever) by treating it as if it were a rational agent who governed its ‘choice’ of ‘action’ by a ‘consideration’ of its ‘beliefs’ and ‘desires.’ The scare-quotes around all these terms draw attention to the fact that some of their standard connotations may be set aside in the interests of exploiting their central features: their role in practical reasoning, and hence in the prediction of the behavior of practical reasoners.
1. The three stances

The distinctive features of the intentional stance can best be seen by contrasting it with two more basic stances or strategies of prediction: the physical stance and the design stance.
The physical stance is simply the standard laborious method of the physical sciences, in which we use whatever we know about the laws of physics and the physical constitution of the things in question to devise our prediction. When I predict that a stone released from my hand will fall to the ground, I am using the physical stance. In general, for things that are neither alive nor artifacts, the physical stance is the only available strategy, though there are important exceptions, as we shall see. Every physical thing, whether designed or alive or not, is subject to the laws of physics and hence behaves in ways that in principle can be explained and predicted from the physical stance. If the thing I release from my hand is an alarm clock or a goldfish, I make the same prediction about its downward trajectory, on the same basis. Predicting the more interesting behaviors of alarm clocks and goldfish from the physical stance is seldom practical.
Alarm clocks, being designed objects (unlike the stone), are also amenable to a fancier style of prediction—prediction from the design stance. Suppose I categorize a novel object as an alarm clock: I can quickly reason that if I depress a few buttons just so, then some hours later the alarm clock will make a loud noise. I don’t need to work out the specific physical laws that explain this marvelous regularity; I simply assume that it has a particular design—the design we call an alarm clock—and that it will function properly, as designed. Design-stance predictions are riskier than physical-stance predictions, because of the extra assumptions I have to take on board: that an entity is designed as I suppose it to be, and that it will operate according to that design—that is, it will not malfunction.
Designed things are occasionally misdesigned, and sometimes they break. (Nothing that happens to, or in, a stone counts as its malfunctioning, since it has no function in the first place, and if it breaks in two, the result is two stones, not a single broken stone.) When a designed thing is fairly complicated (a chain saw in contrast to an ax, for instance) the moderate price one pays in riskiness is more than compensated for by the tremendous ease of prediction. Nobody would prefer to fall back on the fundamental laws of physics to predict the behavior of a chain saw when there was a handy diagram of its moving parts available to consult instead.
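The design-stance bet just described can be put in miniature program form. The following is an illustrative sketch only; the function names and the numeric "times" are hypothetical stand-ins, not a model of any real clock.

```python
# Design-stance prediction of an alarm clock: we predict its behavior from
# its presumed design alone, knowing nothing of its physical constitution.
# The extra, risky assumption the stance takes on board is that the device
# will function as designed (i.e., not malfunction).

def design_stance_predict(alarm_time, current_time, functioning=True):
    """Predict the clock's behavior purely from its presumed design."""
    if not functioning:
        # If the design-stance assumption fails, the prediction lapses;
        # we would have to fall back to the (laborious) physical stance.
        return "no prediction: design-stance assumption violated"
    return "rings" if current_time >= alarm_time else "silent"

print(design_stance_predict(7, 6))   # before the set time
print(design_stance_predict(7, 7))   # at the set time
```

The point of the sketch is the `functioning=True` default: it is exactly the tacit assumption that makes design-stance predictions both swift and riskier than physical-stance ones.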
An even riskier and swifter stance is the intentional stance, a subspecies of the design stance, in which the designed thing is treated as an agent of sorts, with beliefs and desires and enough rationality to do what it ought to do given those beliefs and desires. An alarm clock is so simple that this fanciful anthropomorphism is, strictly speaking, unnecessary for our understanding of why it does what it does, but adoption of the intentional stance is more useful—indeed, well-nigh obligatory—when the artifact in question is much more complicated than an alarm clock. Consider chess-playing computers, which all succumb neatly to the same simple strategy of interpretation: just think of them as rational agents who want to win, and who know the rules and principles of chess and the positions of the pieces on the board. Instantly your problem of predicting and interpreting their behavior is made vastly easier than it would be if you tried to use the physical or the design stance. At any moment in the chess game, simply look at the chessboard and draw up a list of all the legal moves available to the computer when its turn to play comes up (there will usually be several dozen candidates). Now rank the legal moves from best (wisest, most rational) to worst (stupidest, most self-defeating), and make your prediction: the computer will make the best move. You may well not be sure what the best move is (the computer may ‘appreciate’ the situation better than you do!), but you can almost always eliminate all but four or five candidate moves, which still gives you tremendous predictive leverage. 
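The ranking strategy just described can be sketched as a tiny program. Everything here is illustrative: the candidate moves and the evaluation scores are hypothetical stand-ins assigned by an onlooker, not the output of a real chess engine.

```python
# Intentional-stance prediction of a chess computer: treat it as a rational
# agent that wants to win, and predict it will choose the move it values most.

def predict_move(legal_moves, evaluate):
    """Predict the agent's choice on the assumption of rationality."""
    return max(legal_moves, key=evaluate)

def shortlist(legal_moves, evaluate, k=5):
    """Even when the single best move is unclear, the stance narrows
    dozens of legal candidates down to a few plausible ones."""
    return sorted(legal_moves, key=evaluate, reverse=True)[:k]

# Hypothetical scores an observer might assign to the legal moves.
scores = {"Qxf7": 9.0, "Nf3": 1.2, "a4": -0.5, "Kd1": -3.0}
moves = list(scores)

print(predict_move(moves, scores.get))    # the 'rational' prediction
print(shortlist(moves, scores.get, k=2))  # the top candidates
```

Note that the predictor consults only the board situation and an assumption of rationality; it needs no access to the computer's program, which is precisely the leverage the intentional stance provides.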
You could improve on this leverage and predict in advance exactly which move the computer will make—at a tremendous cost of time and effort—by falling back to the design stance and considering the millions of lines of computer code that you can calculate will be streaming through the CPU of the computer after you make your move, and this would be much, much easier than falling all the way back to the physical stance and calculating the flow of electrons that result from pressing the computer’s keys.
But in many situations, especially when the best move for the computer to make is so obvious it counts as a ‘forced’ move, you can predict its move with well-nigh perfect accuracy without all the effort of either the design stance or the physical stance.
It is obvious that the intentional stance works effectively when the goal is predicting a chess-playing computer, since its designed purpose is to ‘reason’ about the best move to make in the highly rationalistic setting of chess. If a computer program is running an oil refinery, it is almost equally obvious that its various moves will be made in response to its detection of conditions that more or less dictate what it should do, given its larger designed purposes. Here the presumption of excellence or rationality of design stands out vividly, since an incompetent programmer’s effort might yield a program that seldom did what the experts said it ought to do in the circumstances. When information systems (or control systems) are well-designed, the rationales for their actions will be readily discernible, and highly predictive—whether or not the engineers who wrote the programs attached ‘comments’ to the source code explaining these rationales to onlookers, as good practice dictates. We needn’t know anything about computer programming to predict the behavior of the system; what we need to know about is the rational demands of running an oil refinery.
2. The broad domain of the intentional stance

The central epistemological claim of intentional systems theory is that when we treat each other as intentional systems, using attributions of beliefs and desires to govern our interactions and generate our anticipations, we are similarly finessing our ignorance of the details of the processes going on in each other’s skulls (and in our own!) and relying, unconsciously, on the fact that to a remarkably good first approximation, people are rational. We risk our lives without a moment’s hesitation when we go out on the highway, confident that the oncoming cars are controlled by people who want to go on living and know how to stay alive under most circumstances. Suddenly thrust into a novel human scenario, we can usually make sense of it effortlessly, indeed involuntarily, thanks to our innate ability to see what people ought to believe (the truth about what’s put before them) and ought to desire (what’s good for them). So second-nature are these presumptions that when we encounter a person who is blind, deaf, self-destructive or insane, we find ourselves unable to adjust our expectations without considerable attention and practice.
There is no controversy about the fecundity of our folk-psychological anticipations, but much disagreement over how to explain this bounty. Do we learn dozens or hundreds or thousands of ‘laws of nature’ along the lines of “If a person is awake, with eyes open and facing a bus, he will tend to believe there is a bus in front of him,” and “Whenever people believe they can win favor at low cost to themselves, they will tend to cooperate with others, even strangers,” or are all these rough-cast laws generated on demand by an implicit sense that these are the rational responses under the circumstances? In favor of the latter hypothesis is the fact that whereas there are indeed plenty of stereotypic behavior patterns that can be encapsulated by such generalizations (which might, in principle, be learned seriatim as we go through life), it is actually hard to generate a science-fictional scenario so novel, so unlike all other human predicaments, that people are simply unable to imagine how people might behave under those circumstances. “What would you do if that happened to you?” is the natural question to ask, and along with such unhelpful responses as “I’d probably faint dead away” comes the tellingly normative “Well, I hope I’d be clever enough to see that I should …” And when we see characters behaving oh so cleverly in these remarkably non-stereotypical settings, we have no difficulty understanding what they are doing and why. Like our capacity to understand entirely novel sentences of our natural languages, our ability to make sense of the vast array of human interactions bespeaks a generative capacity that is to some degree innate in normal people.
We just as naturally and unthinkingly extend the intentional stance to animals, a non-optional tactic if we are trying to catch a wily beast, and a useful tactic if we are trying to organize our understanding of the behaviors of simpler animals, and even plants. Like the lowly thermostat, as simple an artifact as can sustain a rudimentary intentional stance interpretation, the clam has its behaviors, and they are rational, given its limited outlook on the world. We are not surprised to learn that trees that are able to sense the slow encroachment of green-reflecting rivals shift resources into growing taller faster, because that’s the smart thing for a plant to do under those circumstances. Where on the downward slope to insensate thinghood does ‘real’ believing and desiring stop and mere ‘as if’ believing and desiring take over? According to intentional systems theory, this demand for a bright line is ill-motivated.
3. Original intentionality versus derived or ‘as if’ intentionality

Uses of the intentional stance to explain the behavior of computers and other complex artifacts are not just common; they are universal and practically ineliminable. So it is commonly accepted, even by the critics of intentional systems theory, that such uses are legitimate, so long as two provisos are noted: the attributions made are of derived intentionality, not original or intrinsic intentionality, and (hence) the attributions are, to one degree or another, metaphorical, not literal. But intentional systems theory challenges these distinctions, claiming that (1) there is no principled (theoretically motivated) way to distinguish ‘original’ intentionality from ‘derived’ intentionality, and (2) there is a continuum of cases of legitimate attributions, with no theoretically motivated threshold distinguishing the ‘literal’ from the ‘metaphorical’ or merely ‘as if’ cases.
The contrast between original and derived intentionality is unproblematic when we look at the paradigm cases from everyday life, but when we attempt to promote this mundane distinction into a metaphysical divide that should apply to all imaginable artifacts, we create serious illusions. Whereas our simpler artifacts, such as painted signs and written shopping lists, can indeed be seen to derive their meanings from their functional roles in our practices, and hence not have any intrinsic meaning independent of our meaning, we have begun making sophisticated artifacts such as robots, whose trajectories can unfold without any direct dependence on us, their creators, and whose discriminations give their internal states a sort of meaning to them that may be unknown to us and not in our service.
The robot poker player that bluffs its makers seems to be guided by internal states that function just as a human poker player’s intentions do, and if that is not original intentionality, it is hard to say why not. Moreover, our ‘original’ intentionality, if it is not a miraculous or God-given property, must have evolved over the eons from ancestors with simpler cognitive equipment, and there is no plausible candidate for an origin of original intentionality that doesn’t run afoul of a problem with the second distinction, between literal and metaphorical attributions.
The intentional stance works (when it does) whether or not the attributed goals are genuine or natural or ‘really appreciated’ by the so-called agent, and this tolerance is crucial to understanding how genuine goal-seeking could be established in the first place.