I would not go so far as to say that mimicking the physical aspects is a fruitless venture. Given the uncertainty surrounding AI, there can never be any certainty about which approach is in fact fruitful and which is just a shot in the dark. Besides, science recommends that if you want to understand something, you observe, make a hypothesis, then cut it open and find out how it really works (experiment). That is the reason we can put a baboon's heart inside a human (I think it was a baboon). When there was a desire to put an artificial heart in a human, its creators did not build it the way they thought a heart worked; they looked at a human heart and mimicked its physical aspects. This works perfectly for most things, but it seems to me that it will not work for simulating the mind.
From what I gathered at the beginning of the year, a computer already works very much like a neural network. Input signals are transmitted through some mode (the keyboard). The signals then travel through a series of layers and branch off to different locations (RAM, hard drive, CPU, etc.) depending on the input. Finally, the signals are relayed to the output (the screen), at which point the task has been completed. The actual process within a computer is probably a bit more complicated, but not nearly as complicated as the mind.
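The layered signal flow described above can be sketched in a few lines of code. This is only a toy illustration, not anything from the readings: the two-layer network, its weights, and the input are all invented for the example.

```python
import math

def forward(inputs, hidden_weights, output_weights):
    """Pass an input signal through one hidden layer to a single output,
    the way signals branch through layers in the analogy above."""
    # Each hidden unit sums its weighted inputs and applies a squashing function.
    hidden = [math.tanh(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # The output unit does the same with the hidden activations.
    return math.tanh(sum(w * h for w, h in zip(output_weights, hidden)))

# Hypothetical weights, chosen only for illustration.
hidden_weights = [[0.5, -0.2], [0.3, 0.8]]
output_weights = [1.0, -1.0]
signal = forward([1.0, 0.0], hidden_weights, output_weights)
```

The point of the sketch is only the shape of the computation: input, weighted layers, output — the branching the paragraph describes, stripped of everything a real computer (or brain) adds.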
It does not seem to me that this approach would be any more successful than trying to mimic the way we think we think. In either case, we still do not know how the mind truly works. We know that the brain sends and receives signals, but even if we do create something capable of replicating the physical aspects of the brain, that does not mean it will work in the same manner as the brain, much less be "intelligent."
From what I know so far, which is pretty little, I think that neural networks might be a promising road towards AI. Of course, we still have that definitional problem about what "AI" really is. I don't think neural networks are going to start "acting like people" very soon, but their capacity for "learning" seems to fulfill one of the things that we thought might be part of intelligence, so that, at least, is something. Nevertheless, while expert systems, neural networks, and little robotic dogs definitely exist, I'm not sure I'd call them "intelligent," artificially or otherwise. This might prove to be a "fruitful road to AI" in another few millennia, but as far as the near future goes, it feels as if neural networks are just another tiny piece of a tiny puzzle that is a tiny piece of another puzzle that is a piece of another puzzle...and we really have no idea what the "final" result will be.
The neural networks chapter was really quite interesting; however, I can't say that I anticipate its being a "fruitful road to AI," simply because you run into the same problem of trying to mimic something we don't yet fully understand. As far as I can tell (and I don't claim to have a very in-depth understanding of biology or physiology), our comprehension of how neurons fire is purely hypothetical. There is no way we can "prove" it. We assume it to be right until we find something the theory doesn't account for, at which point we reexamine and adjust accordingly. I don't see how a successful model of AI could be built on so unstable a foundation.
My initial reaction to the readings on this subject is that this road will not be tremendously fruitful. The study of perceptrons seemed to indicate that they were not reliable and had to be carefully monitored to produce results. All of the information on neural networks is beyond confusing to me; it doesn't seem fully understood even by those working on the project, or at least that is my understanding from Dewdney. In this case, I can only consider this method a shot in the dark. I think it would be very difficult for a neural-network computer to reach AI (defined, I suppose, as "a computer that learns on its own," or perhaps as a computer version of human thought processes) because a computer does not process chemicals, and I think some of the mystery of our thought processes is not fully understood in brain-chemistry terms. It may also fail to be fruitful because it seems it would require massive funding to become widespread, and this funding would come only from sources invested in its outcome, which I feel must currently be lacking given the inability to see useful results in the near future.
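For readers who haven't met the perceptron Dewdney discusses, its learning rule fits in a few lines. This is the standard textbook perceptron update, not code from the readings, and the AND dataset and learning rate are chosen only for illustration; it shows both the "learning on its own" and the catch that the rule is only guaranteed to settle down when the two classes are linearly separable.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule: nudge the weights
    toward each misclassified example."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            predicted = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - predicted  # 0 when correct; +1/-1 when wrong
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# Logical AND is linearly separable, so the rule converges.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

On a problem like XOR, which no straight line can split, the same loop would thrash forever — which is one concrete sense in which early perceptrons "had to be carefully monitored."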
It seems that the neural networking approach has certainly made some progress (networks have demonstrated the ability to learn, etc.), but it doesn't sound like initial simulations have exactly been rave-worthy. While it would be amazing if computer scientists could develop a comprehensively functional system based on neural networks that accurately mimicked the brain, I am skeptical that this is really possible. Essentially, any machine's ability to "mimic" physical aspects of the brain comes down to how it is programmed by individual human beings. And given that we ourselves don't understand exactly how the neural networks of the human brain work, I don't know that we'd be able to really replicate such networks within a machine. Even if we _did_ understand all the intricacies of neural networking within the human brain, I suppose that at base, I have difficulty believing it is possible to truly or accurately recreate human biological processes, particularly via a programmed piece of machinery.
First, is the human mind the mechanism that intelligence should model? Humans tend to have a bias in this field: by viewing themselves as the center of the universe, humans are reluctant to consider alternatives greater than the human mind. However, the human mind may be the greatest intellectual mechanism known to (wo)mankind; thus, research expanding upon the best model may be fruitful.
The Pacific Northwest National Laboratory (http://www.emsl.pnl.gov:2080/proj/neuron/neural/what.html) states: "They are good pattern recognition engines and robust classifiers, with the ability to generalize in making decisions about imprecise input data. They offer ideal solutions to a variety of classification problems such as speech, character and signal recognition, as well as functional prediction and system modeling where the physical processes are not understood or are highly complex. ANNs may also be applied to control problems, where the input variables are measurements used to drive an output actuator, and the network learns the control function."
If the goal of artificial intelligence is only deductive reasoning, then neural networks can lead the way to artificial intelligence. Neural networks can go beyond human cognition by applying the summation of human knowledge on a subject when problem-solving, and they excel at problems that people are good at solving but for which traditional methods are inadequate. According to http://www-personal.usyd.edu.au/~desm/afc-ann.html, neural networks are most effective and currently practical when classifying data and when forecasting and modeling from a set of inputs. In addition, neural networks are beneficial in comparison to alternatives: 1. They deal with the non-linearities of the world in which we live. 2. They handle noisy or missing data. 3. They create their own relationships among information - no equations! 4. They can work with large numbers of variables or parameters. 5. They provide general solutions with good predictive accuracy.
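Two of the benefits listed above — handling noisy data and learning a relationship without being given an equation — can be shown with the smallest possible network: a single sigmoid unit trained by gradient descent. Everything here (the 1-D dataset, the deliberately mislabeled point, the learning rate) is invented for the sketch.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_neuron(data, epochs=500, lr=0.5):
    """Per-example gradient descent on cross-entropy loss for one sigmoid
    unit: the 'relationship' is learned from examples, never written down."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w += lr * (y - p) * x  # gradient of cross-entropy w.r.t. w
            b += lr * (y - p)
    return w, b

# The underlying rule is 'label 1 if x > 0', but the point at x = -0.3
# is deliberately mislabeled to simulate noise.
noisy_data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (-0.3, 1),
              (0.5, 1), (1.0, 1), (2.0, 1)]
w, b = fit_neuron(noisy_data)
```

Despite the bad example, the learned weight comes out positive and the unit still classifies clearly negative and clearly positive inputs correctly — the trend survives the noise, with no equation ever supplied.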
Neural networks are limited by the parameters assigned to their algorithms. Inductive logic is beyond the purview of neural networks. If artificial intelligence is to extend beyond the limits of human knowledge, and neural networks are limited to human knowledge, then neural networks can be more efficient and accurate in applying human-derived knowledge but will never go beyond the limits of human knowledge.
I think mimicking the way we think is a fruitful road to AI, at least to begin with. I don't know all of the latest news and knowledge on this subject, but I think it is really the only feasible option we have at this point. I think we can gain a lot from it too - in learning more about how our brain works and in trying to create, or just simulate, intelligence in a computer or machine. I think AI will go a long way using neural networks to develop an "intelligent" machine, and it may eventually lead to the discovery of a better approach to AI, but for now I think it is a good and useful one.
I suspect that neural networks will be a very, very fruitful road to artificial intelligence. Being a potential Psych major, I know enough about psychology to have a heartfelt belief that all functions of the brain go back to the cells within our brain and how they interact. I fear, though, that the implementation of AI will rely heavily on these networks of artificial neurons and the programs involved won't think much about how we think--I believe that the ultimate path to AI will be found after having traveled both routes at once--a neural network and human-like programming approach to the solution. It's been my experience in life that things very rarely occur in "black and white;" rather, compromises must be made and few things happen on just one side of the coin. I think narrow-mindedness is the enemy of everyone!!
Maybe I'm too much of a sentimentalist. While I do think it's possible to map out the way the brain works, I don't think we can replicate human thought by replicating the physical structure. There must be more to it than that. I don't know what. I don't often think about what the true, deep, philosophical meanings behind "thought" are. I think it will be possible to create a sort of thought based on the physical structure of the brain. I am dubious of how close to human thought it might be able to get, though. Maybe I am a bit too paranoid, too. I think the idea of replicating human thought is something that we are likely to jump into too early. Just like the whole cloning thing. I see too many people jumping on the "let's clone a mammoth" bandwagon to make me comfortable. Unfortunately (at least, in my opinion), the people who have the technology and money are not always the ones with the best intentions or understanding of what they are doing. Using genetic engineering to re-grow lost limbs is useful. Cloning long-extinct animals is not something I see as a necessity. Likewise, creating better, faster, more user-friendly - and, therefore, more sentient - technology is good. I don't know if making human thought mechanical is.
I think that the question posed here is really interesting. It is sort of a question of ends and means, which I guess has been an issue central to our discussion of AI. Attempts to duplicate the way the human brain functions in terms of physiology may be the most fruitful path to AI. From our class discussions, it is clear that we are very concerned with the means by which "intelligent" ends are reached. In fact, we have seemed pretty much in agreement that it takes a method that can be proven to be intelligent to render the ends legitimate. And since we constantly find ourselves using human-centric means of comparison, a neural model that most resembles the human process may be the only way to establish true AI. So hypothetically, this is a fruitful path to AI, but the realistic difficulty of it may prove the fruit to be rotten.
This page may be found at http://www.math.grin.edu/~rebelsky/Courses/CS105/2000S/Questions/question.37.html
Source text last modified Wed Feb 16 08:16:08 2000.
This page generated on Mon Apr 10 10:01:33 2000 by Siteweaver.