Well, I would have imagined that in order for a program to be "correct," it would naturally be error-free. However, it has become apparent that this is not always the case. Because of the complexity of the way programs are structured, it is almost impossible to eliminate every bug from a program. As a result, people must be content to label a program "correct" if there are no major malfunctions. So, even though we know that a program may not be perfect, we consider it "correct" because it works.
When I think about a program being "correct," I generally think about it doing what it is supposed to do in "reasonable" circumstances (proper parameters for its use, i.e. memory, input type, display, etc.). It seems as if saying a program is "correct" means something a little more specific in the computer science world: not just that it seems to work, but that it can be _proved_ to work some agreed-upon percentage of the time, or of its uses, under the "reasonable" circumstances outlined above.
I have no idea, because it seems that no program is "correct"! I would never say that! Given the Dewdney reading last night, though, I would guess that a correct program would be one which calculates or does whatever it's supposed to do. That is, the algorithm or formula used to arrive at your desired result is mathematically or otherwise "correct." That's all I can think of. You would prove this by checking the formula, since it's impossible to check every possibility.
A program is correct when: 1. Assertions are made about what a program should accomplish during the course of its operation. 2. The assertions are proved by reasoning and mathematical analysis. 3. Bugs are identified when assertions fail under scrutiny; the program may then be fixed and proved correct. "Correct" implies that a total process of analysis has occurred and the program now meets all the assertions it sets out to. Normal proofs basically show that a program "terminates for all inputs of interest." (Dewdney, 68) Assertions are required of most (closed) loops within a program as well, in order for it to be considered proved "correct." It might be impossible to try all states of a variable, but some attempt must be made to show that for the values (inputs) of interest, a reliable result can be reached.
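The idea of assertions inside a loop can be sketched in code. This is a minimal, hypothetical example (not from the reading): a loop that sums 1 through n, with an assertion stating what the program should have accomplished after each iteration.

```python
def sum_to(n):
    """Return 1 + 2 + ... + n, checking a loop assertion as we go."""
    total = 0
    for i in range(1, n + 1):
        total += i
        # Assertion about what the program should have accomplished so far:
        # after adding i, the running total must equal the closed-form sum
        # of 1 through i. If this ever fails, a bug has been identified.
        assert total == i * (i + 1) // 2, "loop assertion violated"
    return total

print(sum_to(100))  # 5050
```

Here the assertion is checked at run time for the inputs we try; a proof of correctness would argue by induction that it holds for every iteration and every n.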
A "correct" program is one which, via inductive reasoning and mathematical analysis, has been proven to work/run properly. Using proofs to test assertions at various stages of a program's operation, one can identify what parts of a program are running correctly, and what parts need to be debugged.
A program is correct if the output of the program equals the correct output. The correct output must be determined, independently of the program, by a proof. The proof the program is compared to must be true; it must withstand the test of hyperbolic doubt. For a program to be true beyond hyperbolic doubt, the program would have to undergo an exhaustive test. An exhaustive test is impossible because there are too many variables.
The proof must predict the same value that the program supplies in all instances. Lowering our skepticism, a program can be considered "correct" if it supplies an output equal to the output of the proof. The program and proof should be manipulated and tested to determine that the program supplies the correct answer in many circumstances, increasing the chance that the program is correct.
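The comparison described above can be sketched as code. In this hypothetical example, an independently derived closed-form formula stands in for the "proof," and the program's output is checked against it over many inputs; agreement raises confidence but, as noted, is not an exhaustive test.

```python
def square_by_addition(n):
    """The 'program' under test: compute n * n by repeated addition."""
    total = 0
    for _ in range(n):
        total += n
    return total

def proof_value(n):
    # The correct output, determined independently of the program.
    return n * n

# Compare program and proof in many circumstances (0 through 999).
mismatches = [n for n in range(1000) if square_by_addition(n) != proof_value(n)]
print(len(mismatches))  # 0: the program agreed with the proof on every tested input
```

A thousand agreements increase the chance that the program is correct, but only a proof covering all inputs would settle it beyond doubt.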
A program is correct when it is proved mathematically to be correct. It is the ultimate in debugging, at least if the proof is correct. Dewdney describes how it works: "One makes certain assertions about what a program should have accomplished at various stages of its operation. Assertions are proved by inductive reasoning, sometimes supported by additional mathematical analysis." Dewdney also states that even if the proof is incorrect, it may point out some logic errors in the program, and that just the attempt to make a proof is very useful and valuable. Finally, he says that in making proofs of correctness, "one also proves that a program terminates for all inputs of interest," meaning, for example, that some variable will eventually go to zero.
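The termination idea, that some variable eventually goes to zero, can be illustrated with a small sketch (a standard example, not taken from the reading): Euclid's algorithm for the greatest common divisor, where the second argument strictly decreases every iteration and is bounded below by zero, so the loop must stop.

```python
def gcd(a, b):
    """Greatest common divisor of two non-negative integers, a > 0 or b > 0."""
    while b != 0:
        old_b = b
        a, b = b, a % b
        # The quantity that goes to zero: b strictly decreases each time
        # through the loop and can never be negative, so the loop terminates.
        assert 0 <= b < old_b
    return a

print(gcd(252, 105))  # 21
```

The run-time assertion checks the decreasing quantity for the inputs we try; the proof of termination is the observation that a non-negative integer cannot decrease forever.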
Well, Dewdney wasn't as helpful on this as I thought he'd be, which is probably why this question was raised. When I think of a program's being correct, I think of it correctly executing the algorithms within it every time. To me, this correctness means the program is error-free, though not necessarily bug-free. If every 100th time you find that force = mass * velocity instead of mass * acceleration, as it should be, then you have a problem. The algorithm must always work, and must be proved to do so. Why can't Dewdney just say something clearly? Is it really that difficult?
Correctness of a program is pretty much just that: does the program function correctly? Does it execute its algorithms the right way? This idea can go beyond the specific functioning of mathematical algorithms to the program as a whole. If it is not really buggy, accomplishes its goals without a lot of wasted energy, and comes up with the right answers, then a program is correct.
By deeming a program correct, we mean that it is running the way that it should be at the current time. A program is shown to be correct by using proofs to evaluate its performance and accuracy. Once a correctness proof is attempted, the program will benefit and be improved, even if the outcome of the proof is negative. "Correct" seems to include notions of efficiency, accuracy, and glitch-free running.
This page may be found at http://www.math.grin.edu/~rebelsky/Courses/CS105/2000S/Questions/question.30.html
Source text last modified Wed Feb 16 08:16:04 2000.
This page generated on Mon Apr 10 09:26:01 2000 by Siteweaver.