A finite automaton is a device that has a limited number of states and shares its characteristics with other identical devices; the assumption, according to Dewdney, is that it is probably a manufactured device. The device also possesses a means of altering its state through an input and output source. Dewdney notes that diagramming the possible state transitions is tricky when dealing with finite automata.
To be totally honest, as usual, I don't understand a great deal of the Dewdney reading. One thing that seemed to make sense to me was the notion of "words" and "languages." As I understand it, in the context of theoretical computing models, "languages" consist of the entire range of "words," made up of sequences of symbols, that are "accepted" by the model. I also think I understand the basic concept of a change in states triggered by some "word," represented in the book by circles with an arrow between them and the inputs written above the arrow.
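The "circles and arrows" picture above can be sketched in a few lines of code. This is only an illustrative sketch (the function and state names are my own, not Dewdney's): the circles become states, the arrows become a transition table, and a "word" is accepted if following its symbols ends in an accepting state.

```python
# A finite automaton as data: a start state, a set of accepting
# states, and a transition table mapping (state, symbol) -> state.

def accepts(word, start, accepting, delta):
    """Return True if the automaton ends the word in an accepting state."""
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

# Example: an automaton whose "language" is all binary strings
# containing an even number of 1s. Two states: "even" and "odd".
delta = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(accepts("1010", "even", {"even"}, delta))  # True: two 1s
print(accepts("111", "even", {"even"}, delta))   # False: three 1s
```

The "language" of this machine is then just the (infinite) set of all words for which `accepts` returns True.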
The most important thing I learned from the reading was from the chapter on Church's thesis: basically, that all computers do essentially the same thing. I think it's pretty amazing that Church's thesis remains relevant and applicable in today's world, considering that he came up with it in the 1930s. I guess some people are just ahead of their time. I would elaborate more, except I don't have the text with me.
I will say that the chapters on black boxes and input/output were interesting given our recent discussions about AI. I guess I continue to struggle for a firm grasp of what a computer would have to do to constitute intelligence. If the basic function of a computer is to give a certain output for whatever the input, it seems that computers will never transcend that, even if their tasks become more complex and "human-like."
If I were to reduce the gist of these chapters to one wonderfully simple principle, I would conclude that they all say something about the important differences between analog and digital computation. Digital computation, which seems to be what the RAM machine represents in comparison to the Turing machine, which functions by reading tapes, is supposedly more robust (I learned this doing a research paper on the technology of compact discs). The final chapter covers Church's Thesis, which I still don't understand exactly, beyond "all computers are created equal." Even if a RAM machine is essentially the same as the Turing machine, it seems like the RAM machine is more efficient. In this sense, the importance of the readings seems to me to be that digital modes take up a smaller amount of "programming space" and are more robust, while the Turing machine would take up a larger "program space" as well as suffer more damage to its structural integrity over time.
Another spin on this question would be to say that the most important thing I have learned from these chapters thus far is that I do not always initially understand where computer scientists are coming from as they try to explain something to me. The Turing Omnibus as a text is, moreover, quite odd in that, read in the random fashion we seem to have adopted, it fails to make any point about what it has explained. Why do I care about this black box, finite automata concept? I need to know in simpler terms, I guess.
Also, the Chomsky Hierarchy, though I am still not clear on exactly what it is (where do these titles come from?), seems to say something about increasing computability by making the base language the computer functions with compatible with other language systems, so that more than one type of data can be interpreted by the machine. Other methods, context-sensitive grammars and so on, seem more limited as far as computational ability is concerned. With computers, it always seems best to start with a small language of signs that can be easily read and interpreted, and redefined in further terms as they become necessary, rather than to design specific systems that base themselves on one or a limited scope of computational subject matter in their language. This is something like what Guy Steele seemed to indicate in his talk about programming languages, which we discussed earlier this semester.
Computers perform unique functions. Certain computers are more general than others. All sufficiently general computers perform different but, in the end, relatively equal functions. The limitations on computing are broad, such that similar limitations apply to different computers; therefore, by definition, all computers share certain closely related limitations. These serve as the general limitations on all computers. Certain kinds of specialization solve, work around, or reduce specific limitations; however, specialization often produces new, unintended limitations.
I found the reading difficult at times, especially in interpreting the diagrams. I understood some things, though, like the black box idea. I noticed that, at the beginning of the chapter, it said that it is possible to discover what is inside the box just by analyzing the inputs and outputs, without looking inside the box. But then at the end of the chapter, it says that "the language accepted by a finite automaton can always be written as a 'regular expression' ... however, for every regular expression there are an infinite number of automata which accept that language. So even when we know precisely the language accepted by an automaton, we have to crack open the box and inspect its state-transition diagram, so to speak, if we wish to know exactly which device inhabits it." I wonder what accounts for this apparent contradiction.
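The point quoted above can be made concrete with a small sketch (the automata and names here are my own invented examples, not from the book): two structurally different machines can accept exactly the same language, so inputs and outputs alone cannot tell them apart, and only opening the box reveals which one is inside.

```python
# Run a finite automaton: follow the transition table over the word.
def run(word, start, accepting, delta):
    state = start
    for ch in word:
        state = delta[(state, ch)]
    return state in accepting

# Automaton A: two states, accepts strings of a's of even length.
A = dict(start=0, accepting={0},
         delta={(0, "a"): 1, (1, "a"): 0})

# Automaton B: four states, a redundant longer loop, but the
# very same language (accepting states sit at even positions).
B = dict(start=0, accepting={0, 2},
         delta={(0, "a"): 1, (1, "a"): 2, (2, "a"): 3, (3, "a"): 0})

# Viewed only through input/output, A and B are indistinguishable:
for n in range(10):
    word = "a" * n
    assert run(word, **A) == run(word, **B)
print("A and B agree on every tested input")
```

So the "contradiction" may be one of degree: behavior pins down the language, but not the particular state diagram hidden in the box.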
In the chapter on Church's Thesis, I noted two things: 1) the thesis itself: "all (sufficiently general) computers are created equal," which "therefore puts a seemingly natural limit on what computers can do"; 2) evidence for Church's Thesis includes proving that "a Turing machine can carry out any computation that a RAM can." I did not see how this chapter related to Chapters 2 and 7, though.
I didn't learn very much from these particular chapters. Usually I do, but tonight my concentration has been stolen by "The Sims," a game I've known was addictive for months but my friend Nick got it and I can't stop watching him play. I JUST CAN'T!!!!!
Anyway, I did learn some things. I learned first that this particular set of chapters shouldn't be read in reverse order: bad idea. The most important thing I learned from this selection, though, is that even with computers you can, in a sense, take one apart and learn what's going on inside, just like anything else, only by analyzing electronic signals instead of actually taking it apart (Chapter 2). I think the first paragraph really tells you what I learned; I must avoid this game at all costs or my life will be sucked away forever.
To be honest, I can't say I learned a lot from the reading, other than that I don't understand the type of machine that was being discussed. To start with, I do not see the use of a "black box" machine, or how it really functions. Does it have pre-programmed circuits inside that recognize certain binary sequences? And if so, why? What purpose does such a machine serve? As for the other types of machines, I don't understand what it is they are actually doing. There is a tape that the computer reads, but what is reading it? How does it transfer information from one tape to another? And again, why? I have never fully understood the original function and purposes of early computers, nor could I understand the bridge between simple tape-reading machines that blink lights and what we have now. Maybe in class we could talk about the nuts and bolts of these machines instead of the theory behind them? I feel like I would understand the theory more if I knew what the computer was doing and why.
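The question above of what actually "reads" the tape can be answered with a sketch: the reader is nothing but a control loop with a lookup table. This is a toy simulation under my own naming, not the book's presentation; it shows one hypothetical machine that flips every bit on its tape and halts at the first blank.

```python
# A tiny Turing-machine simulator. The "reader" is just a loop that
# looks up (state, symbol) in a rule table, writes a symbol, moves
# the head left or right, and changes state until it halts.

def run_tm(tape, rules, state="start"):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")          # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example rules: flip every bit, then halt on the blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm("1011", rules))  # -> "0100_"
```

There are no pre-programmed circuits in this picture; the machine's entire "purpose" lives in the rule table, and swapping tables gives a different machine on the same hardware.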
Uhh... Well, if I've gathered something important, it may be a clearer understanding of how a Turing machine works. Seeing the breakdown of computing languages to the basic level inspires a lot of questions and curiosities regarding the ways these languages materialize in computers today. I'm actually not that clear on much. The concept of finite computing is definitely new, and I'm just having some trouble visualizing the ways these concepts are applied. But I'm sure it'll all be clearer in class tomorrow.
This page may be found at http://www.math.grin.edu/~rebelsky/Courses/CS105/2000S/Questions/question.41.html
Source text last modified Wed Feb 16 08:16:10 2000.
This page generated on Mon Apr 17 09:45:29 2000 by Siteweaver.