In my opinion, in order for me to classify a program as "intelligent," it must be able to do a variety of things that represent "human intelligence." Using human intelligence as my guideline, the ability to learn from mistakes and experiences is an important criterion. Also, the ability to make moral judgments/decisions would be necessary.
That's a tough one, and I can't say I've totally thought it through yet, so I'm not sure I can answer it, but I'll try. At first, the whole it-has-to-be-human to fit our version of "intelligent" (essentially eliminating programs and machines from the running) seemed pretty convincing, but it seems a little elitist, not so much towards programs we write or machines we construct, but towards other life forms (known and yet to be discovered). I think that 'intelligent' cannot simply refer to 'human-like,' because we call other people's intelligence into question all of the time without questioning their humanity. It seems as if we have essentially different conceptions of intelligence with regard to humans versus programs and machines. Possessing human qualities is not enough to make a _person_ 'intelligent,' but because it seems nearly impossible to create a computer with these qualities, many people label them 'intelligent' in a computing context. I think there's something to be said for the claim that we continually shift the definition of 'intelligence' to whatever we think a non-human couldn't possibly do. Personally, I have a kind of minimal definition of intelligence, at least when it comes to programs. If a program can do things as simple as responding to input, calculations, sorting lists, etc., well, in many ways that seems pretty intelligent to me. We teach children how to do these things and call them intelligent when they learn, so why not a program?
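The minimal definition above can be made concrete with a small sketch (the functions here are invented for illustration, not drawn from any particular program): a few lines that respond to input, do a calculation, and sort a list.

```python
# A minimal sketch of the "simple" abilities mentioned above:
# responding to input, calculating, and sorting a list.

def respond(greeting):
    """Respond to a simple input."""
    return "Hello!" if greeting.lower().startswith("hello") else "Pardon?"

def average(numbers):
    """A simple calculation."""
    return sum(numbers) / len(numbers)

def sort_list(items):
    """Sorting a list."""
    return sorted(items)

print(respond("Hello, program"))   # -> Hello!
print(average([2, 4, 6]))          # -> 4.0
print(sort_list([3, 1, 2]))        # -> [1, 2, 3]
```

Whether behavior this simple counts as intelligence is exactly the question at issue; the sketch only shows how low the bar would be under the minimal definition.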
After reading Forrester's chapter on the matter, I'm hard-pressed to come up with any criteria that would deem a program "intelligent," because I was very persuaded by the various arguments that such a program would be impossible to create. First, there is our ambiguous and highly subjective definition of intelligence. Then there is the problem of defining intelligence by our human-centered perceptions. Lastly, there is the problem of understanding human intelligence at all. How can you replicate something you don't understand? Also, the numerous failures thus far suggest to me that we are nowhere near these ambiguous and lofty goals, so I'm not really worried about it.
A program would be considered intelligent if it met the following criteria:
1. The program accomplishes what it is supposed to accomplish, providing whatever results are necessary in an accurate fashion.
2. The program does this in the most efficient way possible.
3. The program makes appropriate 'decisions' where they are required on the basis of input data, and those decisions or evaluations are made in such a way that unnecessary processing steps are not used.
Some criteria for determining the intelligence of a program might be whether or not it has the capacity to problem-solve/think somewhat independently of human input, to perform various functions without constant human guidance or oversight, to learn how to perform tasks (including the physical manipulation of objects), and perhaps to emulate human responses to various stimuli (audio, visual, tactile). At a very basic level, it would seem that an "intelligent" program would be one capable, to some extent, of thinking, responding, and functioning as a human being would.
To determine the criteria for intelligence, we must first be certain what "intelligence" means. Frequently, especially with respect to AI, definitions of intelligence over-emphasize the importance of formal symbolic manipulation. John Searle discusses this in greater depth than I do below. Some definitions of intelligence are compatible with AI possessing an intelligent quality.
Intelligence is knowing or understanding in a rational manner, not merely reacting. AI, and computers generally, receive a series of inputs and respond with output. A calculator receives 'one' 'plus' 'two' 'equals' and responds with 'three'. Does the calculator respond with the same answer a human would? Yes. Does an equivalent response by a calculator, AI, or computer, when compared to the human response, determine the intelligence of the machine? No. It is necessary for the machine to return the "correct" answer to have intelligence; however, it is not sufficient.
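The calculator example can be sketched in a few lines (a toy built for illustration, not any real calculator's implementation): the program maps input symbols to an output symbol by rule, with no grasp of what the numbers mean.

```python
# A toy calculator in the spirit of the example above: it manipulates
# symbols by rule, without any understanding of what they symbolize --
# syntax, not semantics.

import operator

OPS = {"plus": operator.add, "minus": operator.sub, "times": operator.mul}

def calculate(left, op, right):
    """Return the 'correct' answer by pure symbol manipulation."""
    return OPS[op](left, right)

print(calculate(1, "plus", 2))  # -> 3, the same answer a human gives
```

The point of the sketch is that producing the right symbol is clearly achievable; whatever intelligence requires beyond that is what the lookup table lacks.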
Intelligence involves intentionality. Searle states, "Because the formal symbol manipulations by themselves don't have any intentionality, they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality...is solely in the minds of those who program...." Machines extend the intentionality of humans but are unable to possess intentionality themselves. The machine knows how to function but does not understand the function it performs; the causal relationships between the variables are beyond the comprehension of the machine. The machine can operate only on the single level of responding to inputs. Insofar as the machine responds to inputs, it possesses intelligence; however, the entire process, including the relationships and warrants behind it, is outside the purview of the machine.
Humans possess knowledge of these relationships. The relationships are part of the human set of beliefs. If intelligence included only formal symbolic manipulation, the machine would be intelligent. However, when a human feels threatened and responds with a feeling of anxiety, is that not an intelligent reaction? Machines might be able to do the same, but only by reacting according to symbols.
The innate qualities of humans are a part of intelligence that machines have so far been unable to incorporate. My criterion for judging a program as intelligent is whether the program contains a rational consciousness. The rational consciousness includes both accuracy and intentionality. Intentionality is the proof of accuracy.
I would say that the key criterion for a program (or anything) to be "intelligent" is the ability to learn. By their nature, programs are good at remembering things, but the ability to learn is much more than that. Learning involves taking information and experience that one already has and making educated inferences to acquire more knowledge. Dictionary.com defines "learn" as "to gain knowledge, comprehension, or mastery of through experience or study." Dictionary.com defines intelligence as "the capacity to acquire and apply knowledge and the faculty of thought and reason." I think that the only way to acquire and apply knowledge is to learn it (unless it is pre-programmed), and in that case the difference is that learning involves the faculties of thought and reason.
I also think awareness is important, but I think that relates more to consciousness and being alive. And although all of the intelligent things we know of are conscious, I don't necessarily think that someone or something has to be conscious to be intelligent.
I think of intelligence more than anything as the ability to learn, and not only learn, but to be able to apply original techniques to a problem. I think most humans are capable of at least that much, and many animals as well. As a result of this definition I've given to intelligence, I think that good criteria for judging programs as intelligent involve something similar: the ability to learn. I suppose my ideal intelligent machine would exhibit some form of ability to incorporate something completely new into its program; a program which doesn't search for methods within its memory but rather thinks of the solution that would be best and uses it. An intelligent computer program of this type must be self-expandable. It must be able to write sub-routines and sub-algorithms for finishing problems, and must also be able to evolve: learn from its mistakes and not repeat them. I also think a truly intelligent machine would be able to pass the Turing test in a different way--you ask it a question about emotions, and it responds with an answer similar to that of a human with emotions, or, even better, is able to exhibit emotions all on its own.
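One narrow piece of the above, "learn from its mistakes and not repeat them," can be sketched in a few lines (the problem and the function names are invented for illustration): a solver that remembers failed guesses and never tries them again.

```python
# A minimal sketch of "learn from its mistakes and not repeat them":
# the solver records guesses that failed and skips them on later
# encounters, rather than blindly retrying.

def solve(candidates, works):
    """Try candidates in order, remembering failures so none is retried."""
    failures = set()
    for guess in candidates:
        if guess in failures:
            continue            # a remembered mistake is not repeated
        if works(guess):
            return guess
        failures.add(guess)     # remember the mistake
    return None

# Toy problem: find a number divisible by 7, with repeated bad guesses
# in the input.
print(solve([3, 5, 3, 5, 14, 21], lambda n: n % 7 == 0))  # -> 14
```

Of course, this falls far short of the self-expanding program described above: it avoids old mistakes but invents nothing new, which is exactly the gap between memory and learning that the response points to.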
This page may be found at http://www.math.grin.edu/~rebelsky/Courses/CS105/2000S/Questions/question.33.html
Source text last modified Wed Feb 16 08:16:05 2000.
This page generated on Tue Apr 11 09:13:10 2000 by Siteweaver.