SENIOR PHILOSOPHY THESIS: ARTIFICIAL INTELLIGENCE


"Artificial Intelligence" is a term and an idea that has elicited much controversy over the years. The idea of programming something to think seems to bring out disturbing assumptions of mind that some do not want to face. The thought of programing a functional mind assumes that the mind works in ways like a computer. This leads to comparisons of the human mind being a form of computer. Cognitive science goes along with this idea that the mind is a sort of computer. It seems that the major thrust behind arguments for and against the possibility of artificial intelligence is that intelligence must be like human cognition. To be intelligent one must be in some way like humans, and react in human ways. It comes down to an interesting criterion: for something to be considered intelligent, it must act like a human being in behavior and cognitive style. A computer program will be considered intelligent if it can fool a human being. I believe that this is not necessarily the case, that intelligent thought can be separate from being like a human. I believe we are concentrating too much on human-like programs and not enough on intelligence itself.

This paper argues from the position that Owen J. Flanagan, Jr. calls the "Suprapsychological" AI argument. I am arguing that intelligence does not have to be human or human-like in nature, that we have focused too much on creating a computerized human, not something that is a computer intelligence. According to this view, "Once a program is conceived, written, and run on a computer it achieves psychologically real status in the broadest possible sense." A mind does not need to be like a human one to be intelligent.

The belief that being human-like is a way of measuring intelligence for computers can be said to have started with Alan Turing. In his essay "Computing Machinery and Intelligence", published originally in 1950, he puts forth a test for computer intelligence called the imitation game. It is a test designed with three individuals -- one a judge, another a human, and the third a computer programmed to try to fool the judge into thinking it is a human being. Turing believes that, if the computer could fool the human judge, the computer could be considered to think. But to fool a human judge the programmed computer would need to have the background information that a human would. It would need to have an understanding of the world that humans live in and a knowledge of human concepts. He avoids the question of a non-human-like intelligence by stating:

"May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection."

This totally avoids the question by placing the human mind as the measure. Turing believed that the mind was digital in nature, and that a computer, as a universal machine, could duplicate the operations of another machine. So he believed that the human mind could be imitated. In the Turing test, the man is the measure of the machine. To be intelligent, the computer would have to be able to mimic the functions of the human mind.

Searle also goes by the theory that intelligence must be human in nature. He denies the possibility of computer intelligence for the very reason that it must be human to be intelligent. He states that a machine can think -- humans are such thinking machines. He even states that a digital computer can think -- humans are such machines. But a computer cannot think solely because it has the right type of program, because it has no intentionality; it is only syntax without semantics.

Intentionality is a biological phenomenon. The computer program has no understanding of what it does; it is not thinking about anything. It just follows rules that have been programmed into it. It is like, states Searle, a man locked in a room with a pile of symbols and a rule book telling him which symbols to put out in response to incoming symbols. The person has no idea or understanding that the symbols are Chinese. The people outside may be fooled into believing that the person understands the language, but he does not have, in fact, a clue of what is going on. Searle states that manipulating symbols is not enough. He even states that the computer program, like the man in the room, is not truly manipulating symbols, because symbols stand for something. To both the program and the man in the room, the symbols have no meaning. They are both simply moving bits of data around according to rules -- rules without understanding, syntax without semantics, objects without meaning.

Searle also connects intentionality to biological causes:

" `Could a machine think?' My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains. And that is the main reason strong AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines. Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena.... the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not."

Margaret Boden answers these statements rather well in my opinion. To answer the claim that only biological systems can have "causal powers", she points out that this is an assumption. We do not know the brain as well as we know how photosynthesis works. It is not obvious that nothing else could do the same function in another environment. Searle may claim that perhaps something could do something like photosynthesis on another world, but it is not so obvious that other substances that are inorganic could not support mental processes or have intentionality. Searle claims his point is obvious and logical, and expects that others must agree with him, but this argument is not as obviously true as he seems to believe.

Now to go into the other objection Searle has to artificial intelligence. Searle claims that a system cannot have intentionality solely because it has the right type of program, that there must be something more than formal symbol manipulation. The person in the room is only following rules. Boden points out several problems with this argument. For one, "...the instantiation of a computer program, whether by man or by manufactured machine, does involve understanding -- at least of the rule-book." She makes the further point "...that natural languages can be characterized in procedural terms, ... words, clauses, and sentences can be seen as mini-programs." The man has proven he understands something of English: he can understand the simple rules in the rulebook. The man in the room only really needs to know a very limited set of words, a very simple set of commands to follow. Another point raised is that Searle has assumed that there is no meaning behind the computer's actions -- that it is strictly formal in nature. Boden replies:

"But what must not be forgotten is that a computer program is a program for a computer: when a program is run on suitable hardware, the machine does something as a result (hence the use in computer science of the words `instruction' and `obey'). At the level of the machine-code the effect of the program on the computer is direct, because the machine is engineered so that a given instruction elicits a unique operation (instructions in high-level languages must be converted into machine-code instructions before they can be obeyed). A programmed instruction, then, is not a mere formal pattern -- nor even a declarative statement (although it may for some purposes be thought of under either of those descriptions). It is a procedure specification that, given a suitable hardware context, can cause the procedure in question to be executed."

Now, it seems to be a basic assumption that to be intelligent a computer must be able to act as a human being in the very least, if not constructed as one. In essence, we are asking that the computer program BE HUMAN. We are asking this of something that will not have a human body, of something that need not be human-like at all. Why this distinction in intelligence? Why say that all thought must be human in style? Why say that a program, or group of programs, must respond in human terms, in human ways? It would seem that the assumption is that all minds must act like a human one, no matter what context it developed from. It seems to be an expectation that another mind must act like ours does, even if the mind involved comes from a different angle, from a context separate from our own.

Perhaps it would be prudent to develop a working definition of intelligence by presenting my own. I hope to create a framework to present my arguments, so that I can show why I believe there can be other intelligences that are not human in style or content.

Intelligence is an interesting term that is used to describe all sorts of things. In my definition it is, at heart, an adaptive phenomenon. It allows the possessor to adapt to different situations better than something without it. It can gather information about the environment and apply it to situations that are encountered, weighing possibilities and acting in a way that gives the organism a greater chance of survival. It allows the organism to alter its behaviors in response to changes in the environment.

Intelligence involves a sort of memory, an ability to remember past experiences and to apply the information gained to current circumstances. In effect, the possessor has a sort of memory that stores past experiences. It can also retrieve information it needs in a timely manner for use. Information stored could be as simple as a symbolic representation of what something is, or as complex as an actual memory of the event, complete with a memory of why something happened. At its simplest, memory is a repertoire of symbols, used to determine what things are, and can be used in understanding things. This form of memory is the basis for language, and is a basis for thought. This does not mean that memory is always used this way, only that it can be.

Intelligence has some sort of motivation. Motivation is essentially the why of action. It is what is behind a purposeful action (or reaction). Motivation is goal-directedness, the moving force of intelligence. For my purposes, a very good example of a simple motivation is survival. Surviving in an environment, being able to keep on existing, is a sort of motivation that causes the intelligence to be used. Motivations could be put into a computer program, and could be as simple as collecting a type of information, or as general as survival in a sort of environment. At a certain point the motivation could be self-programmed. It is, in essence, goal-directedness that gives a reason for the actions that are taken.

The environment that the intelligence bearer is in has a strong impact on intelligence itself. The environment, the world that the individual interacts with, is the place that the symbols for thought come from. The environment is the obstacle that the intelligence develops against. With an environment, a mind has something to do; it has something that it must deal with to survive. (Throughout this paper, unless stated otherwise, I am going to be using survival as a basic motivation for action.) I believe that, at least at first, language symbols come for the most part from the environment. Thought must be about something, must be intentional. Without a basis for thought, intelligence is nothing. The basis of thought comes from the environment. This idea is one of the main points I wish to stress. The environment that the intelligence develops in determines to a great degree the style of thought, whether it will be like a human's or in a different style. There can be built-in differences; there can be intelligence that is different because of a different sort of wiring. But I am going to add that the wiring does not necessarily matter, that a non-human intelligence can be unlike a human because of a different environment, because of different environmental demands. A computer intelligence may have different ways of getting the environmental information. This will add to the difference in mind. But the different types of perception do not invalidate its experience of the environment or its intelligence just because it perceives differently.

An intelligent mind uses some sort of symbol "language" to store information. This storage in the memory is in symbolic form. The storage is separate from the environment that it is part of. It is the internal representation of an external world. This internal representation can have less in it than the external. This internal representation is limited, limited to what information can be picked up about the external world.

The concepts of the internal and external world bring us to another concept that I believe is connected to intelligence, which is the idea that there must be some sort of separateness, some sort of division between the bearer of intelligence and the "world" that the intelligence bearer inhabits. There must be some way to differentiate between the individual and the world, some ability to draw a line between the internal "mental" world and the world around the intelligence. It is the same sort of distinction as what is part of the "personal" and what is not. It is the perception of a division between the intelligence and what influences it externally. This may seem to be an unnecessary point, but in fact it is quite important in reference to computer intelligence. If a system, an "intelligence", does not know there is a difference between itself and the environment, how can it interact with anything that is not itself? I would say that at the very least it would need to have a symbol for itself, an idea of what it was and what was not part of it. It needs to be able to see where it ends and the world that it is interacting with begins.

I believe that this separateness implies some sort of body, something that delineates the separation. The intelligence has some sort of way of perceiving the environment it is in. Perception, in this sense, is the ability to pick up information about the surroundings. It is something that gathers information for the intelligence to use. I am using this very simple definition of perception because I believe that it is the bare minimum that can be had and be intelligent. It is the lowest form of perception that we know of. The intelligence would also have some way to interact with the environment. It not only can perceive the world that is separate, it can manipulate it in some way, I would assume. There is the idea of it having a sort of body, of it having that division between what it is, what it can perceive with and manipulate with, and the world at large. I believe it should have an idea of the body that it has, that it knows the boundaries between "itself" and what is "not self".

To summarize, intelligence is an adaptive faculty, one that allows the possessor to interact with an environment. It uses symbols of some sort to store and to refer to information, and can retrieve stored information for use in adapting to changes. It is motivated in some way, and has some sort of goal-directed behavior -- for example the "survival" of the intelligence. There is a possibility that motivations could be created by the intelligence. Moreover, it needs to have some way to effect change in the environment. It also needs the ability to perceive an environment to interact with.

Environment, as I am using this term, has covered many different things. It can be considered a field of action, an area that affects the individual; it is the "world" that the individual must deal with. It can be our physical world, or it could be one formed inside a computer. Connected to the term "environment" is the term "context". Context is the perceivable world to the individual. It is the world affected by the limits of perception; it is the filter through which one sees the environment. For instance, the environment could have something that we call "blue". The contextual side may not have that thing, because the individual does not have the ability to pick up that sort of information. So, context is a bit more subjective than environment. The environment, or what the individual can perceive of the environment, is the source of the information that the intelligence uses to think with and what to think about.
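The distinction between environment and context can be put in concrete terms. What follows is a minimal illustrative sketch, not part of any actual AI system; the names (perceive, world, senses) are my own inventions. The idea is simply that context is the environment passed through the filter of the individual's perceptual capacities:

```python
def perceive(environment, senses):
    """Return the agent's context: the environment filtered by its senses."""
    return {prop: value for prop, value in environment.items() if prop in senses}

# The environment contains a property ("color") that this agent
# has no sense for -- so "blue" is in the world, but not in the context.
world = {"shape": "cube", "size": 3, "color": "blue"}
senses = {"shape", "size"}

context = perceive(world, senses)
# context == {"shape": "cube", "size": 3}; "color" has no bearing on the agent
```

The unperceivable property is still part of the environment's data; it simply never enters the representation the agent thinks with.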

This definition of environment raises an important question: Does it matter if the intelligence in question is not being applied in a "physical" environment? I would say that it does not, for one main reason. The question must be looked at from the point of view of where the intelligence is. If the world that the intelligence interacts with is in a computer, and the intelligence can perceive the world in the computer, and it can interact with that environment, then is that environment not the "physical world" for that intelligence? We say that our world is the physical world because we perceive it to be such. The world that the intelligence is in is that intelligence's "physical world". This point will become relevant when confronting the question of how the programmed intelligence relates to other environments such as our own physical environment.

So far, we have been defining terms to set the basis for the rest of this paper, and laying down some requirements for intelligence. Why have we concentrated on the relationship between environment and intelligence? I am placing so much emphasis on this point because it is one of my major problems with concentrating on creating a "human" artificial intelligence. Intelligence depends on the context that it develops under; it is dependent on the sort of world it perceives. The environment gives the template for the thoughts and the style of cognition. Dreyfus had it partially correct when he said "...since intelligence must be situated it cannot be separated from the rest of human life." He goes on to say later that:

"If one thinks of the importance of the sensory-motor skills in the development of our ability to recognize and cope with objects, or of the role of needs and desires in structuring all social situations, or finally of the whole cultural background of human self-interpretation involved in our simply knowing how to pick out and use chairs, the idea that we can simply ignore this know-how while formalizing our intellectual understanding as a complex system of facts and rules is highly implausible."

Intelligence must be situated somewhere, it just does not need to be situated in a human context as he thinks it must. There are other possible contexts, such as the micro-world sort that will be discussed later. Dreyfus tries to invalidate the idea of the micro-world earlier in the same paper by stating:

"It is likewise misleading to call a set of facts and procedures concerning blocks a micro-world when what is really at stake is the understanding of what a world is. A set of interrelated facts may constitute a universe, a domain, a group, etc., but it does not constitute a world, for a world is an organized body of objects, purposes, skills, and practices in terms of which human activities have meaning or make sense. It follows that although there is a children's world in which, among other things, there are blocks, there is no such thing as a blocks world."

This definition tries to draw a line where one is not needed. Why not include all those "objects, purposes, skills", etc., under the area of those interrelated facts that make up the universe? His division seems to miss the point, in my opinion. The intelligence is a part of that world, as are the actions it conducts. There is a separateness, but there is no denying that there is also a connectedness, in that the individual influences the "world" and vice-versa. If there is a micro-world of blocks, it is a world, just one very much confined to specific rules and boundaries, and to specific facts.

I believe that the first step in the right direction in the development of Artificial Intelligence was the development of the micro-world approach. This allowed for a limited world that did not include too many variables. It allows for, I believe, the designing of programs with a very basic sort of understanding by limiting possibilities. I will argue this by showing how such a micro-world program could be considered intelligent, even if this program is not intelligent to the degree we may want. It is a stupid form of intelligence, "but stupidity is virtually a species of intelligence. To be stupid is to be intelligent, but not very."

The micro-world program I would like to use is the one created by Terry Winograd in 1972. It would probably be a good idea to say at the outset that Mr. Winograd believes that "one cannot construct machines that either exhibit or successfully model intelligent behavior."

SHRDLU is a program designed to understand language. It:

"...carried on a dialogue with a person (via teletype) concerning the activity of a simulated robot arm in a tabletop world of toy objects (often referred to as the `blocks world'). The program could answer questions, carry out commands, and incorporate new facts about its world. It displayed the simulated world on a video screen, showing the activities it carried out as it moved the objects around."

This example shows many of the things I consider part of intelligence. It has a separateness from the world it inhabits. It shows this by having an ability to move things around, and to perceive the world it is in. It is separate, in that it is not part of the things to be moved. One could say that it is the robotic arm it simulates, that it has a separate token for itself as not part of the world it manipulates. It is the manipulator. It has a point of view that perceives where the blocks are in relation to each other. It can compare objects in that limited world to each other. It can even figure out ambiguous references by backtracking in its memory, by checking what was asked earlier and comparing it to the current situation. As Margaret Boden pointed out, SHRDLU does not know much about blocks; it is constrained by some rules that would not be adaptive in a larger context, and it cannot think of possibilities. The possibilities that it can think of are limited to its world and what it is asked to do. It has no other possible goals. SHRDLU is very limited indeed as to what it can think about. But does limitation on what can be worked with mean that it is not intelligent, or just that its intelligence is limited by its environment? SHRDLU does not need to be very intelligent, by virtue of the fact that the world it works with is so limited.

Paul M. Churchland points out that SHRDLU does understand something about the world it is in. He states:

"A much more impressive program is Winograd's SHRDLU, which handles semantics as well as syntax, and manipulates the elements in the (simulated) block world that is all it knows. Its syntax is very sophisticated, and the program embodies some systematic information about the properties of the bodies that inhabit its world. Crudely, it knows, a little, what it is talking about. As a result, it can draw useful inferences and divine real relations, a talent that is reflected in the much more complex and keenly focused conversations one can have with it. Conversations must be restricted to the block world, however, and to those narrow aspects it encompasses. SHRDLU does not have an empty knowledge base, but its base is still less than microscopic compared with our own."

He goes on to say that the problem in understanding language is that one must have the knowledge base of the world -- which is much the point. To act as a human, to think in and understand human concepts, one needs that knowledge base. Why not make a knowledge base that is different from that of a human? Why must a knowledge base have everything we are familiar with, rather than a different sort of background? What if the knowledge base used does not need such concepts as humans use in our world, but follows different paths of knowledge?

In SHRDLU's world, it does not need much of a knowledge base. It does not really need any goal other than one provided from outside itself, in the form of requests by the programmer. This is because there are no other problems for it to work against; nothing can affect it other than the questioner. It is limited as to what it knows about blocks because it has no need to know anything about the blocks other than a limited idea of differences. The differences are labeled: SHRDLU knows what we call X, Y, and Z. SHRDLU knows the rules that govern certain things, such as that nothing goes on a pyramid, things go in the box, etc. These laws control its actions, just as we have rules that govern our activities. The laws of SHRDLU's world just happen to be much simpler, as are the things in that world. SHRDLU does not plan goals, but can do what is asked, and knows what it cannot do. It can understand that some goals will not work. This is a reactive system, in that it reacts to requests made of it, and does nothing else on its own. It can adapt itself to a certain degree, however. It can accept new designations for things and work with those new designations. It can be told that a box on another box is a tower, and if asked to make one, it will do so. Not incredibly adaptive, but adaptive in a primitive sense.
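The kind of law-governed, teachable behavior described above can be sketched in a few lines. This is purely an illustration of the idea, not Winograd's actual program; every name here (MicroWorld, put_on, teach) is a hypothetical invention. The world enforces one of the rules mentioned -- nothing goes on a pyramid -- and accepts a new designation the way SHRDLU accepts "tower":

```python
class MicroWorld:
    """A toy blocks world: a few objects, one physical law, teachable names."""

    def __init__(self):
        self.objects = {"b1": "block", "b2": "block", "p1": "pyramid"}
        self.on = {}            # maps each object to what it rests on
        self.designations = {}  # learned names for configurations

    def put_on(self, obj, support):
        # The world's "law": a pyramid cannot support anything.
        # The program thereby knows what it cannot do.
        if self.objects[support] == "pyramid":
            return False
        self.on[obj] = support
        return True

    def teach(self, name, description):
        # Accept a new designation, e.g. "tower" = a block on a block.
        self.designations[name] = description

world = MicroWorld()
world.put_on("b1", "p1")   # refused: nothing goes on a pyramid
world.put_on("b1", "b2")   # allowed: block on block
world.teach("tower", ("block", "on", "block"))
```

The point of the sketch is the asymmetry: the refusal is not understanding in any rich sense, but it is a primitive form of knowing the world's rules and one's own limits within them.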

Arguments have been put forward that the micro-world has no relational value to the real world, that SHRDLU has no real understanding of what a block is, or what a pyramid is. The shapes have no real-world values, such as weight or mass. I do not agree with this. SHRDLU does know what a block is; it is just that its idea is a much simpler one than ours. It could, assuming a lot of translating into physical language, relate to a simple "outside" version of its world. If you rigged up a camera, and gave SHRDLU a robotic arm to move, it would still be able to do what it does in its simple computer world. It could relate to blocks. It just has a much less involved idea and definition of what a block is.

Another point brought up by Boden is that "...there are in fact no objects...The visual display is strictly secondary to the workings of the program." I do not see this as a problem, since it is the projection that acts as a translation medium, allowing for there to be a conversation. Both parties need to know what they are talking about. The picture allows for the questioner to see something is being done, and to say what else is wanted. SHRDLU would be moving the data around regardless of the picture, the questioner would simply be unable to see it visually without the projection.

Micro-worlds are a beginning, but they are too simple to allow for more involved intelligences. They can only go so far, and to deal with more complex things more information will be needed. We need to do more towards enlarging the domain that an intelligence can work in. Churchland points out that we not only need to work on enlarging this base of knowledge, but we need to determine how new information is to be acquired, how new conceptual frameworks are to be formed. We need to work on some sort of selective memory, for what is useful or important. But, I would say, we are definitely on the right path to finding out how to do this.

It is time to include self-concept into understanding intelligence. It is connected to the idea of separateness from the world the intelligence is in. A self-concept is that which allows the individual to be separate from the world around it. It allows for there to be an interaction of differences. If there is not a perceived difference between the individual and the environment, there is nothing to develop and to learn against. There is nothing to cause change. The self-concept allows for there to be things that are "I" and "not I". It makes the boundaries for referencing against.

I believe that we could create a sort of micro-world that includes many more variables than we have now. Perhaps weight, dimensions, etc. could be added gradually, so that a program like SHRDLU would have more of a knowledge of our sort of blocks than it would in its originally sparse environment. But a world can be as big as you like, as long as the program has a separateness from the world that is perceived. It could be considered an extended micro-world, but without many of the limitations. The more information and flexibility the environment has, the more potential the intelligence has to develop.

The program, or more likely a group of interacting programs, must have some sort of spatial point of view or there can be no action. A point of view is simply an extension of the self-concept. It is something that allows for things to be separate from the self, and for seeing things in relation to the point the self occupies. The intelligence is not everywhere at once. It has a point in space, where it is and nothing else. It resides at a point in space where it is in relation to other things that it interacts with. An artificial intelligence would need to be able to "know" that it was separate from the rest of the world it was in, perhaps with something as simple as a self-designation as being "not-blocks" or as the mover of things in the world. It would know what was it and what was not it. This makes a context which it is in, which it is in some way separate from, which allows it to learn things about its world. It may not literally take up space in that environment (SHRDLU does not), but it can be affected by changes in that environment. In SHRDLU's case, SHRDLU can manipulate things in the environment, and can perceive changes in that environment, but cannot physically be affected by changes in that world. The changes affect what SHRDLU can do in that world, what can be manipulated and changed. The environment will determine what is asked of it and what it can do to that environment.
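The minimal self-concept and point of view described above amount to two simple commitments: the system holds a token for itself, and it locates everything else relative to the point it occupies. The sketch below is an illustrative invention (the names SELF, positions, relative_view are mine, not drawn from any real system):

```python
# The agent's designation for itself -- e.g. "the mover of things".
SELF = "mover"

# Where everything sits in the world, the self included.
positions = {SELF: (1, 1), "b1": (2, 1), "p1": (-1, 3)}

def relative_view(positions):
    """Everything that is 'not I', located from the agent's own point."""
    sx, sy = positions[SELF]
    return {obj: (x - sx, y - sy)
            for obj, (x, y) in positions.items() if obj != SELF}

view = relative_view(positions)
# view == {"b1": (1, 0), "p1": (-2, 2)}: the self never appears in its
# own view of the world, and all locations are relative to its point.
```

However trivial, this captures the requirement: the boundary between "I" and "not I" is just the exclusion of the self-token, and the point of view is just the origin from which everything else is measured.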

Now to make the leap. This paper is arguing that an intelligence does not need to be like a human intelligence, that it can depend on the context the mind developed in to determine how it thinks. Let us look into the idea of context a bit more closely to understand the point.

Context is the external world to the individual, whether it be a man or a machine. The individual must be able to perceive things in the environment, so it needs some sort of information-gathering capacity. Context can be a filtered form of this external world. The world is perceived through different senses, which are what give the intelligence its reference point. The context has given the individual the knowledge base that the individual uses to understand things. For example, if there is color in the world, but the intelligence has no way to perceive color, then color is not part of the context, but it is part of the world. The unperceived or unperceivable things are still there, but they have no bearing on that individual (that it knows about).

Let us assume for a moment that the thing we are talking about has intelligence to some degree. Now it needs to be able to interact with its world, so it has parts of it that can affect the environment. This includes possible movement about the environment, to move other things, etc. How does the intelligence get any thoughts? It would need to "see" things, store information on those things, and see how things interact. This would include what happens if it does something. These interactions would give it the basis to understand other things in its world, so its experience would grow in the context it developed in. This includes the learning of laws that are applicable to that context, such as cause and effect.

Now consider an Artificial Intelligence, developed in a computer, with a different way of interacting with its world than ours; it would seem to function in a different contextual framework. It only knows about our world through information put into its memory, and through information typed in afterwards. It has never encountered the things we describe to it. It is learning the code for what we call things. The AI system would only know the representations of things. I doubt it would perceive things the way we do. The information would probably be translated into a form it could understand, and a label given to that data. The system would have no reason to have any goals about that information, unless we made the information have some sort of value to it.

The computer is learning all this data in a different format than we learn it. While we may be learning and remembering things in a pictorial or other abstract form, the program (SHRDLU, for example) is getting the information coded into machine language. That language is translated into a form we understand, and vice versa, through layers of translation such as compilers and parsers. The block world is not a block world to the program. The data translation is what allows for communication, and it is allowing us to communicate already. What would happen if we did not have this sort of communication base? Probably chaos. Without an area of common context, there is no communication. To communicate, a mutual context would have to be developed, or a way of comparing contexts would need to be devised.
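The dependence on a shared translation can be made concrete. In this hypothetical sketch (the phrases and codes are invented), we and the program hold different representations of the same block, and communication succeeds only because both sides share a table:

```python
# Hypothetical sketch: communication across contexts through a shared
# translation table -- the "area of common context" in the text.

encode = {"red block": 0x01, "green pyramid": 0x02}  # human phrase -> machine code
decode = {code: phrase for phrase, code in encode.items()}

def to_machine(phrase):
    return encode[phrase]   # what the program actually stores

def to_human(code):
    return decode[code]     # what appears on our screen

# The round trip loses nothing only because both sides share the table.
message = to_human(to_machine("red block"))
```

Remove the table and each side is left with a representation the other cannot use; that is the "chaos" the paragraph above gestures at.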

So we see the problem with trying to program a computer to think like a human. It has no need to think like a human to be intelligent. It can be intelligent without being in a human context. The context it functions under may place it outside our normal experience. What we need to do is not program a computer to think like a human, assuming that if it cannot act like a human it is not intelligent; rather, we need to find out what contexts allow for interaction and allow intelligence to develop.

Translations of experience can only be taken so far. SHRDLU, for example, could be taken into a context with "real" blocks, and SHRDLU would have an idea of what blocks are. But SHRDLU would not have the same level of understanding, or the same amount of information about blocks, as a human being does. SHRDLU can only understand certain things about blocks, and can relate those things to its world. There will be losses in the retelling between the worlds.

I have been arguing that the context the intelligence develops in is an important determinant of how the intelligence interacts. The context does not have to be like our world, and perhaps should not be. The information is coded differently for a computer program. We may see the block world on the screen, but to the computer program those blocks are simply the coded information that goes by that label. This may be a rather important contextual difference, or it may not. It does point out, however, that just because we see something a certain way does not mean it must be perceived that way, or even coded for that way.

Throughout this paper, I have been going under the assumption that the intelligence has some sort of body. I believe that a body is something that the mind can control to affect other things in the environment. This body has the ability to gather information from outside of itself. It can conduct actions through the body. It is distinct from the world around it. I also assume that the mind and body are one, in that without an ability to influence the outside world the intelligence would be ineffectual.

I also believe that no intelligence could be without some sort of motivation, some reason that it acts and reacts. It could be as simple as a survival principle, such as: if it does not do something, it will not survive. It could be as complex as a self-motivating principle, such as the wish to get as much information on some subject as it can. Emotions can be motivators, but I do not know if it would be possible to program in emotions, or if emotions develop as a reaction to certain things and get wired in by experience into likes and dislikes. But it is simple to develop a motivation. Or is it?

Motivation is usually associated with emotions in humans. But what would motivate a computer program? We can program in certain directives, certain things that are to be done. But are they motivations? Perhaps they are. But an intelligent being can change motivations, in essence change its programming to a certain degree, and understand why it performs certain actions. This comes with having a self-concept. Having a self-concept would seem to imply being able to reflect on oneself, and to alter behavior to adapt. But the environment may be too limiting to need this. That is perhaps the biggest stumbling-block for SHRDLU, in my opinion. The environment is so simple that there is no need for such a degree of intelligence. SHRDLU adapts, but only so far as the environment allows.

To conclude, intelligence is a mechanism of the mind that allows for the use of information that will be useful to the bearer. It is an adaptive thing. It has methods of doing things, of thinking of and about things, that have worked before in attaining goals. Intelligence is goal-directed; in other words, it is a motivated thing, moved to attain those goals. It contains memory, and can call upon that memory system for information to solve problems. It stores information in an abstract form. For intelligence to exist it must be in some sort of environment to interact with. This implies that it has some way to tell what is it and what is not it. It has, to put it another way, a sort of self-concept or point of view. And it would be able to learn.

It does not need to have the complexity of thought that a human has, or even the same sort of motivations. A hypothetical computer intelligence would not necessarily think in the same way as a human being would. It would not have the same context of experience that a human has. Parallels could possibly be made for translation of experiences, but only to a point. It would be seeing our world from a different perspective, with some things perhaps not translatable in experience. Theories of Artificial Intelligence seem too caught up in making a computer think like a human. Perhaps what we should try to do is work on developing an AI that thinks and adapts to an environment, rather than try to create a human in a machine. We should take more into account that the context a computer intelligence would be coming from would be different from a human's, and would lead to different thought patterns. Intelligent mental activity could very well be the result of digital minds. Perhaps the human mind runs very much like a computer. But we must keep two things in mind. First, the computer intelligence is going to develop in a very different environment than our own. Second, its thoughts are going to be stored differently than our own. We are not even sure how our minds work. Let us try to develop a computer intelligence, not a human in a computer. I suspect that a computer will be able to pass the Turing test, but it will be because it has gained some way of translating experiences, and will have information on how a human will react, not because it thinks like a human. To judge something intelligent or not intelligent just because it is like or unlike a human seems rather egocentric, and perhaps blinds us to actual intelligence.


References

Boden, Margaret A., Artificial Intelligence and Natural Man, 2nd ed., expanded. New York: Basic Books, 1977; 1987.

Boden, Margaret A. (ed.), The Philosophy of Artificial Intelligence. Oxford: Oxford University Press, 1990.

Churchland, Paul M., Matter and Consciousness, rev. ed. Cambridge, Mass.: MIT Press, 1990.

Flanagan, Owen J., The Science of the Mind. Cambridge, Mass.: MIT Press, 1984.

Haugeland, John, Artificial Intelligence: The Very Idea. Cambridge, Mass.: Bradford Books, 1990.

Haugeland, John (ed.), Mind Design. Cambridge, Mass.: Bradford Books, 1981.

Hofstadter, Douglas R., and Dennett, Daniel C., The Mind's I. New York: Bantam Books, 1990.

Winograd, Terry, and Flores, Fernando, Understanding Computers and Cognition: A New Foundation for Design. New York: Addison-Wesley, 1986; 1987.
