This page may be found online at http://www.math.grin.edu/~rebelsky/Courses/CS302/99S/Handouts/exam.03.html.
Distributed: Friday, April 23, 1999
Due: Start of class, Friday, April 30, 1999
No extensions!
Warning! This answer key is still under development. Some parts are missing, and some are partially incorrect.
As we saw in our discussion of types, it is possible to define integers with three basic operations:
zero (the basic value);
succ (add 1);
pred (subtract 1).
In this notation, 3 would be written succ(succ(succ(zero))) and -2 would be written pred(pred(zero)).
In working with this notation, it may help to have techniques that allow you to convert from this form to ``standard'' integers and back again.
/*
 * toInt(Num,Int)
 * Converts a number in the pred/succ form to an integer.
 * Does not like backtracking.
 */
toInt(zero,0).
toInt(succ(Num), I) :- toInt(Num,Part), I is Part + 1.
toInt(pred(Num), I) :- toInt(Num,Part), I is Part - 1.

/**
 * fromInt(Int,Num).
 * Converts an integer to a number in pred/succ form.
 * Can be used to generate numbers in the succ/pred form.
 * Does not like backtracking.
 */
fromInt(0,zero).
fromInt(I, pred(Num)) :- I < 0, J is I+1, fromInt(J, Num).
fromInt(I, succ(Num)) :- I > 0, J is I-1, fromInt(J, Num).
For example,
?- toInt(pred(pred(zero)), X).
X = -2
?- toInt(succ(succ(succ(zero))), X).
X = 3
?- toInt(succ(pred(zero)), X).
X = 0
Note that these are intended as generative predicates, but that they're not guaranteed to work successfully if you backtrack through them.
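The same conversions can be sketched outside of Prolog. The following is a hedged Python illustration of my own (not part of the assignment), representing succ/pred terms as nested tuples such as ("succ", ("succ", "zero")):

```python
def to_int(num):
    """Convert a succ/pred term (tuple encoding) to a Python integer."""
    if num == "zero":
        return 0
    op, rest = num
    return to_int(rest) + (1 if op == "succ" else -1)

def from_int(i):
    """Convert an integer to a succ/pred term (tuple encoding)."""
    if i == 0:
        return "zero"
    if i > 0:
        return ("succ", from_int(i - 1))
    return ("pred", from_int(i + 1))

print(to_int(("pred", ("pred", "zero"))))  # -2
print(from_int(3))                         # ('succ', ('succ', ('succ', 'zero')))
```

Unlike the Prolog predicates, these functions are one-directional, which is part of why the Prolog versions "do not like backtracking."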
In each of the following, you are asked to write a predicate that manipulates values written using the pred/succ notation. Make sure that you include a few examples that show how your predicates work (or fail to work).
equals predicate

Write an appropriate equals(X,Y) predicate that holds if X and Y are equal, even if they don't have the same form. That is, pred(pred(succ(zero))) should be the same as pred(zero). You may not use fromInt or toInt.
I've found it most convenient to use the answer to part B (canonical form) in developing an answer to this question. Two things are equal if their canonical forms are identical. We might write this as:
equal(X,Y) :- canonical(X,Canon), canonical(Y,Canon).
Wyatt came up with a particularly elegant solution that we will discuss in class.
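To illustrate why comparing canonical forms works, here is a standalone Python sketch of my own (the fromInt/toInt prohibition applies to the Prolog solution, not to this side illustration), again representing succ/pred terms as nested tuples: two terms are equal exactly when their net values, and hence their canonical forms, coincide.

```python
def net(num):
    """Net integer value of a succ/pred term in tuple encoding,
    e.g. ('pred', ('pred', ('succ', 'zero'))) has net value -1."""
    if num == "zero":
        return 0
    op, rest = num
    return net(rest) + (1 if op == "succ" else -1)

def equals(x, y):
    """Two terms are equal iff their net values agree, regardless of form."""
    return net(x) == net(y)

# pred(pred(succ(zero))) and pred(zero) both have net value -1
print(equals(("pred", ("pred", ("succ", "zero"))), ("pred", "zero")))  # True
```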
We might say that one of these integers is in ``canonical form'' if it does not have both pred and succ. Write a predicate, canonical(X,C), that holds if C is the canonical form of X. Your predicate should also be able to generate the canonical form of any number. You may not use fromInt or toInt.
Here are a number of possible ``interesting'' test cases for your (and my) canonical predicate.
Simple or Base Cases
canonical(zero,zero).
canonical(pred(zero),pred(zero)).
canonical(succ(zero),succ(zero)).
canonical(pred(succ(zero)), zero).
canonical(succ(pred(zero)), zero).
Expected failures
canonical(zero,pred(X)).
canonical(pred(zero),succ(X)).
canonical(pred(zero),zero).
canonical(succ(zero),pred(X)).
canonical(zero,pred(succ(zero))).
Simple Nesting
canonical(pred(pred(succ(zero))), pred(zero)).
canonical(succ(succ(pred(zero))), succ(zero)).
canonical(pred(pred(succ(pred(zero)))), pred(pred(zero))).
Multiple Nesting
canonical(pred(pred(succ(succ(zero)))), zero).
canonical(pred(pred(succ(succ(succ(zero))))), succ(zero)).
Generation of Canonical Forms
I've noted that while some of you can verify that something is canonical, you may generate other non-canonical forms. (I'll admit that my first solution did, too.)
canonical(zero,X).
canonical(pred(zero),X).
canonical(succ(zero),X).
canonical(pred(succ(zero)), X).
canonical(succ(pred(zero)), X).
canonical(pred(pred(succ(zero))), X).
canonical(succ(succ(pred(zero))), X).
canonical(pred(pred(succ(pred(zero)))), X).
canonical(pred(pred(succ(succ(zero)))), X).
canonical(pred(pred(succ(succ(succ(zero))))), X).
Generation of Other Forms
While canonical is clearly designed as a one-way predicate (it is designed to generate canonical forms), it might also be used to generate a restricted set of noncanonical forms.
canonical(X,zero).
canonical(X,pred(zero)).
canonical(X,succ(zero)).
We begin with the basic cases. zero is in canonical form (or, more precisely, zero is the canonical form of zero).
canonical(zero,zero).
Similarly, numbers with only one operation are in canonical form.
canonical(pred(zero),pred(zero)).
canonical(succ(zero),succ(zero)).
Now we turn to things that aren't in canonical form, and consider how to put them in canonical form. What do we know about the canonical form of pred(succ(X))? We know that it's the same as the canonical form of X. The same is true if we have succ(pred(X)).
canonical(pred(succ(X)), Canon) :- canonical(X, Canon).
canonical(succ(pred(X)), Canon) :- canonical(X, Canon).
Are we done? Not yet. We haven't figured out what to do with nested succs and preds. For example, we might have succ(succ(pred(zero))). This doesn't match the left term in any of the rules we have developed for canonical. As some of you noted, we can't even tell the ``sign'' by looking at the outermost operator. For example, succ(succ(pred(pred(pred(zero))))) is negative, even though the outermost operator is ``add 1''.
From a practical standpoint, we need rules that match two nested succs and two nested preds. We've then covered every possible kind of expression and given a rule for each.
What do we know about the canonical form of pred(pred(X))? Suppose we knew the canonical form of pred(X), which we'll call SubCanon. We know that pred(pred(X)) is the same as pred(SubCanon), and the latter is closer to being canonical. Is it canonical? Not necessarily. However, if we find the canonical form of pred(SubCanon), then we're done.
canonical(pred(pred(X)), Canon) :-
  canonical(pred(X), SubCanon),
  canonical(pred(SubCanon), Canon).
canonical(succ(succ(X)), Canon) :-
  canonical(succ(X), SubCanon),
  canonical(succ(SubCanon), Canon).
Unfortunately, this solution does not work if the thing on the left is already in canonical form. Consider pred(pred(zero)). We recurse on pred(zero) and find that it's canonical. Then we try to find the canonical form of pred(pred(zero)). But that's what we started out trying to find.
The solution is then to divide it into cases based on the presumed canonical form, which can be zero, negative, or positive.
canonical(pred(pred(X)), zero) :-
  canonical(pred(X), SubCanon),
  canonical(pred(SubCanon), zero).
canonical(pred(pred(X)), pred(Y)) :-
  canonical(pred(X), Y),
  canonNonpositive(Y).
canonical(pred(pred(X)), succ(Y)) :-
  canonical(pred(X), SubCanon),
  canonical(pred(SubCanon), succ(Y)).
canonical(succ(succ(X)), zero) :-
  canonical(succ(X), SubCanon),
  canonical(succ(SubCanon), zero).
canonical(succ(succ(X)), pred(Y)) :-
  canonical(succ(X), SubCanon),
  canonical(succ(SubCanon), pred(Y)).
canonical(succ(succ(X)), succ(Y)) :-
  canonical(succ(X), Y),
  canonNonnegative(Y).
As you see, this uses two simple helper predicates.
canonNonpositive(zero).
canonNonpositive(pred(X)) :- canonNonpositive(X).
canonNonnegative(zero).
canonNonnegative(succ(X)) :- canonNonnegative(X).
Unfortunately, this doesn't generate the canonical form in every case.
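As a cross-check on what a full solution must produce, here is a Python sketch of my own (the fromInt/toInt prohibition is a constraint on the Prolog answer, not on this side note), representing succ/pred terms as nested tuples: compute the net value, then rebuild an all-succ or all-pred term.

```python
def net(num):
    """Net integer value of a succ/pred term in tuple encoding."""
    if num == "zero":
        return 0
    op, rest = num
    return net(rest) + (1 if op == "succ" else -1)

def canonical(num):
    """Rebuild the canonical form: all succ, all pred, or plain zero."""
    n = net(num)
    op = "succ" if n > 0 else "pred"
    term = "zero"
    for _ in range(abs(n)):
        term = (op, term)
    return term

print(canonical(("pred", ("pred", ("succ", ("pred", "zero"))))))
# ('pred', ('pred', 'zero'))
```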
Write an appropriate add(Operand1,Operand2,Result) predicate that holds if Operand1 and Operand2 sum to Result. You may not use fromInt or toInt.
Once again, we'll start with the base case. When you add zero to anything, you get something equal to that thing.
add(zero,Y,Result) :- equal(Y,Result).
The next case is easiest to consider as a mathematical formula: (X+1) + Y = 1 + (X+Y).
add(succ(X), Y, Result) :- add(X, Y, SubResult), equal(succ(SubResult), Result).
Similarly, (X-1)+Y = (X+Y)-1.
add(pred(X), Y, Result) :- add(X, Y, SubResult), equal(pred(SubResult), Result).
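The same recursion can be mirrored in a Python sketch of my own, with succ/pred terms as nested tuples. Note that this version simply re-attaches Operand1's operators onto the result, so the result need not be canonical; that is why the Prolog version goes through equal rather than demanding a particular form.

```python
def add(x, y):
    """Peel each operator off x and re-attach it to the sum, mirroring
    (X+1) + Y = 1 + (X+Y) and (X-1) + Y = (X+Y) - 1."""
    if x == "zero":
        return y            # base case: zero + Y = Y
    op, rest = x
    return (op, add(rest, y))

print(add(("succ", ("succ", "zero")), ("succ", "zero")))
# ('succ', ('succ', ('succ', 'zero')))
```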
Note that you only have to do two of B, C, and D. If you do all three correctly, you will receive some modicum of extra credit.
In many languages, such as Scheme, it is important to have balanced parentheses. That is, every opening paren has exactly one closing paren, every closing paren has exactly one opening paren, and the opening paren in each pair precedes the closing paren.
One might write this in grammar form as
S ::=              // empty string; no parens
    | '(' S ')'    // parentheses around correct string
    | S S          // two correct strings
Unfortunately, this grammar is ambiguous. Find a string which has at least two parse trees and draw those trees.
In the following, e is a shorthand for empty. One simple ambiguous string, (), includes an extra empty in one derivation, but not the other (after all, it doesn't appear in the string, so we don't need to generate it).
Derivation 1 (no extra empty):
    S => ( S ) => ( e )
Derivation 2 (an extra empty matched on the right, via S ::= S S):
    S => S S => ( S ) S => ( e ) S => ( e ) e
A more complicated version has multiple empty strings confusing the matter. In this case, we do ``match'' all of the empty strings.
S => S S => e S => e S S => e ( S ) S => e ( e ) S => e ( e ) e
Then rewrite this grammar unambiguously. Indicate which strategy you used to select among parse trees for strings with multiple parse trees.
There are clearly a host of ambiguity problems with this grammar. The most grievous problem comes from being able to concatenate two strings (using S S), either of which may be empty. What should we do? Don't allow an empty string to be concatenated on the left. (This is a fairly arbitrary design decision; it doesn't affect the language, and it meets our normal standard of grouping to the left.) This means that instead of
S ::= S S
we'll use
S ::= '(' S ')' S
Note that this simply requires that the first S be parenthesized, because the only kind of nonempty S you can have is a parenthesized S.
Is that enough? It certainly solves the second problem above, since we don't allow extra emptys at the front. However, the first tree is still ambiguous, and for a similar reason (nothing forces us to match the extra epsilon).
S => ( S ) S => ( e ) S => ( e ) e    // using S ::= '(' S ')' S
S => ( S ) => ( e )                   // using S ::= '(' S ')'
The problem we have now is that both
S ::= '(' S ')' S
and
S ::= '(' S ')'
are quite similar. What do we do? Since S can still be empty, the second is just a special case of the first in which the trailing S derives empty. Hence, we don't need the second.
Our final grammar is
S ::=              // Empty
    | '(' S ')' S  // Match and concat
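Because the final grammar is unambiguous and each production is chosen by one token of lookahead, a recursive-descent recognizer falls straight out of it. A Python sketch (the function names are my own):

```python
def parse_s(s, i=0):
    """Parse S ::= (empty) | '(' S ')' S starting at index i.
    Return the index just past the parsed S; raise on an unmatched '('."""
    if i < len(s) and s[i] == '(':
        j = parse_s(s, i + 1)              # the inner S
        if j >= len(s) or s[j] != ')':
            raise ValueError("unmatched '(' at index %d" % i)
        return parse_s(s, j + 1)           # the trailing S
    return i                               # the empty production

def balanced(s):
    """A string is in the language iff parsing S consumes all of it."""
    try:
        return parse_s(s) == len(s)
    except ValueError:
        return False

print(balanced("(())()"))  # True
print(balanced("(()"))     # False
```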
In some variants of LISP (real or imagined), a square right bracket closes ``the appropriate number'' of left parens, but must close at least one. For example, ((] is legal because the square bracket can close both left parens. Similarly, ((]) is legal, because the right bracket can close one left paren.
Write a grammar for this language.
We begin with the original grammar.
S ::=              // Empty
    | '(' S ')' S  // Match and concat
Next, we note that one left paren can be closed by a right bracket. What goes between the two? That's still left to be determined.
S ::= '(' X ']' S
Now, what can go between the open paren and the closing bracket? It can be something with just matched parens (an S). It can be something with an extra open paren at the front, since that's matched by the right bracket. It can have an extra open paren in the middle or at the end, since that's matched by the right bracket.
Putting it all together, we get
X ::= S            // just matched parens
    | '(' X        // extra open paren at the front
    | X '(' X      // extra open paren in the middle or at the end
In other variants of LISP, a square right bracket closes all open parens. In such a language, ((]) is illegal, because the right bracket closes both left parens, leaving nothing for the final right paren to close.
Write a grammar for this language.
In this instance, we have to limit what can go between two matching parens. In particular, no square bracket can go between matching parens. We'll use M for matching parens.
S ::= M
M ::=              // Empty
    | '(' M ')' M  // Matched parens only
Now, what else can we have? We can have an open paren, followed by some stuff, followed by a close brace.
S ::= '(' X ']' S
What can go in the middle? Things with extra left parens. Matched strings. Combinations thereof.
X ::= '(' X        // extra left paren
    | S            // matched string
    | X X          // combinations thereof
In yet other variants of LISP, you can use both parens and brackets. A right bracket closes all open parens up to the corresponding left bracket. For example, ([((]) is legal in this language.
Write a grammar for this language.
Once again, we can begin with parenthesized thingies.
S ::=              // Empty
    | '(' S ')' S
Now, we add an entry for things with braces.
S ::= '[' X ']' S
What can go between braces? Something matched. Something with extra left parens. Some combination thereof.
X ::= S            // something matched
    | '(' X        // extra left paren
    | X X          // combinations thereof
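The grammar describes this last language declaratively; as a sanity check, the same language can be recognized operationally with a stack. This is a sketch of my own, under the assumption that ']' pops every pending '(' back to (and including) its matching '[':

```python
def legal(s):
    """Recognize the bracketed-LISP variant where ']' closes every open '('
    back to the matching '[', and ')' closes exactly one '('."""
    stack = []
    for ch in s:
        if ch in "([":
            stack.append(ch)
        elif ch == ')':
            if not stack or stack[-1] != '(':
                return False           # nothing for ')' to close
            stack.pop()
        elif ch == ']':
            while stack and stack[-1] == '(':
                stack.pop()            # ']' swallows the pending parens
            if not stack or stack[-1] != '[':
                return False           # no matching '['
            stack.pop()
        else:
            return False               # only parens and brackets allowed
    return not stack                   # everything must be closed

print(legal("([((])"))  # True, the handout's example
```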
From the perspective of a grammar designer, which of the versions of the square bracket grammar was easiest to write?
I'll admit that I didn't find significant differences among the three (although I was able to build on previous work as I went along). The second grammar had the interesting facet that I needed to worry more about what went between parens, which may have made it the hardest to handle, but that's because I tend to think recursively.
Suppose we have been asked to design a language in which we would like to include a record type (in effect, an ordered product type) and assignment between record variables. To begin such a design, we need to consider some of the relationships between records.
We begin by considering four related types --- ab, abc, abd, and cab --- defined as
type
  ab  = record a: real; b: int; end;
  abc = record a: real; b: int; c: int; end;
  abd = record a: real; b: int; d: int; end;
  cab = record c: int; a: real; b: int; end;
We also define four variables using these types
var
  alpha: ab;
  beta: abc;
  gamma: abd;
  delta: cab;
Give a short argument both for and against each of the following assignments:
In looking at answers to these questions, we should probably begin by considering some principles that we might (or might not) apply. Note that many of these are mutually (and possibly even self-) contradictory.
To begin with, there is an important question of whether it is ever appropriate to assign a variable of one type to a variable of another type. After all, isn't the whole purpose of typing to prevent such misuses? An implication of the question is that we're trying to design a language in which some such assignments are reasonable, because we can successfully argue that such assignments are reasonable.
As we've seen in many discussions, some language designs have less to do with logical elegance than ease or efficiency of implementation. In some cases, we can argue for or against a particular choice based on implementation details.
A reasonable principle for assignment might be that ``programmers tend to use the same names for the same kinds of things''. Hence, we might look at naming in our analysis of potential assignments.
We've recently been investigating object-oriented programming, particularly as it relates to types. Hence, it may be worth looking at assignment from an object-oriented perspective.
We might also reflect on record assignment as it relates to primitive assignment and coercion. A typical model (the one that permits assignment of reals to integers, but not vice versa) is that approximation is okay, but information loss is not. (Seems contradictory, doesn't it?)
Some possible issues to consider (perhaps not for these examples if considered separately, but rather when we consider them as a whole) are transitivity and symmetry. If we can assign a to b and b to c, can we also assign a to c? (Transitivity.) Similarly, if we can assign a to b, should we also be able to assign b to a? (Symmetry.) In most cases, we will choose to support transitivity, but will not consider symmetry as a particularly motivating force.
Something underlying every answer is a presumed implementation. Many of us assume that record assignment is equivalent to ``copy fields''. Couldn't it also mean ``change reference'', as it does in Java?
A final thing to consider is motivation. Why would we allow record assignment when we can simulate record assignment by field assignment? The main reason is to simplify the job of the programmer. One of you defined a similar concept of noninterference: let programmers write what they want, and do whatever seems most appropriate. This would argue for many of the assignments.
I'll admit that my own personal bias (which comes from working with languages like Perl or HTML) is that ``as long as the names match, assignment should be okay''. In terms of loss-of-information, I believe that some information is lost in many assignments that are typically permitted, so I look at other issues (such as similarity to subclassing).
But you could tell that from part B :-).
alpha = beta
I was surprised that none of you looked at this in terms of inheritance. In effect, abc is a subclass of ab. We allow assignment of variables of subclasses to variables of superclasses, so that gives a motivation for doing this assignment. It may also suggest a different meaning. Rather than thinking of record assignment as assigning fields, we can think of it as assigning objects.
beta = alpha
A number of you seemed to have trouble thinking about this one. What happens to the c in beta after the assignment? It is up to us, as language designers, to specify that. I was hoping to see more answers acknowledge this. Unfortunately, I did not.
beta = delta
For this one, a number of you seemed to treat ordering with great reverence. Ordering of fields may have more to do with ease of implementation than anything else. In fact, in some programming languages, the fields of a record are little more than the indices in a dictionary ("dictionary" is the general term for structures like hash tables and association lists).
The following two are optional and may be done for extra credit.
beta = gamma
alpha = delta
Suppose that our language does not include recursive types (but records can have records as fields). Write a type-checking routine for a language that accepts the assignment v1=v2 (where v1 is of type T1 and v2 is of type T2) only if T2 contains all of the fields of T1. (Note that T2 can contain additional fields.) This rule allows alpha=beta, alpha=delta, and beta=delta.

Write your routine in a reasonable pseudocode. You may assume a reasonable set of operations for getting information about types and fields.
Many of you seemed to confuse assignability with equivalence. Assignability is unidirectional. That is, the questions of whether we can assign a to b and whether we can assign b to a must be answered separately. Equivalence is bidirectional. That is, if a is equivalent to b, then b is equivalent to a (and vice versa).
Some of you forgot to check primitive types. Any assignability algorithm should check all possible aspects. Some of you also forgot to recurse. I did tell you that we could have records as fields of records, which suggests that I was expecting some form of recursion.
/**
 * Determine if a variable of type T2 can be assigned to a variable of type
 * T1, using the metric of "if all fields in T1 have a field in T2 that
 * has the same name, and can be copied to T1, then permit the assignment;
 * otherwise, disallow the assignment".  We are working in a language that
 * only has simple types and record types (so if a type is not simple, it
 * is a record).  While records may have other records as fields, it is not
 * possible to have a recursively-defined record.
 *
 * Assumes that we have access to the following functions:
 *   boolean isSimple(Type T): determines if a type is simple
 *   NameList fields(RecordType RT): get the list of field names in a
 *     record type
 *   Type getField(RecordType RT, Name fieldName): extract the type of
 *     a particular field
 *   boolean containsField(RecordType RT, Name fieldName): determine whether
 *     a record type contains a particular field
 *   boolean simplyAssignable(SimpleType T1, SimpleType T2):
 *     determines if a variable of simple type T2 can be assigned to a
 *     variable of simple type T1
 */
boolean assignable(Type T1, Type T2) {
  // Are they both simple types?  If so, make sure that they're
  // compatible simple types.
  if (isSimple(T1) && isSimple(T2))
    return simplyAssignable(T1,T2);
  // Is only one a simple type?  If so, they're not the same.  (There could
  // be languages in which we allow assignment between simple types and
  // records, but this isn't one of them.)
  else if (isSimple(T1) || isSimple(T2))
    return false;
  // Neither is simple.  Both must be records.  Step through the fields of
  // T1, making sure that there is a field with the same name in T2 that can
  // be assigned to the corresponding field in T1.
  else {
    foreach fieldName in fields(T1) {
      if (!containsField(T2, fieldName))
        return false;
      if (!assignable(getField(T1, fieldName), getField(T2, fieldName)))
        return false;
    } // foreach
    // If we've gotten this far, every field has an assignable counterpart.
    return true;
  } // both are records
} // assignable(Type,Type)
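The pseudocode can be made concrete. Here is a Python sketch of my own, modeling record types as dicts from field names to types and simple types as plain strings, with simplyAssignable reduced to name equality (an assumption for illustration, not part of the assignment):

```python
def simply_assignable(t1, t2):
    # assumption for illustration: simple types are assignable only if identical
    return t1 == t2

def assignable(t1, t2):
    """Can a value of type t2 be assigned to a variable of type t1?
    Simple types are strings; record types are dicts of field name -> type."""
    simple1 = isinstance(t1, str)
    simple2 = isinstance(t2, str)
    if simple1 and simple2:
        return simply_assignable(t1, t2)
    if simple1 or simple2:
        return False                   # a record never matches a simple type
    # every field of t1 must appear in t2 with a (recursively) assignable type
    return all(name in t2 and assignable(ftype, t2[name])
               for name, ftype in t1.items())

# the handout's types, in the dict encoding
ab  = {"a": "real", "b": "int"}
abc = {"a": "real", "b": "int", "c": "int"}
cab = {"c": "int", "a": "real", "b": "int"}

print(assignable(ab, abc))   # True  (alpha = beta)
print(assignable(abc, ab))   # False
```

Note that field order plays no role in the dict encoding, which matches the earlier observation that fields are little more than dictionary indices.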
Surprisingly, polymorphism is a term that many computer scientists seem to use in different ways.
Find three definitions of polymorphism other than those in our book and course web. At least one definition must come from a book, rather than from the Web. Write down the definitions.
Most of you did poorly at citing things, although I was relatively lenient in the penalties. A citation to a published, hardcopy, document should include both publisher and year along with author and title. A citation to a web page should include the author (if known), the date the page was last modified, and the date the page was last accessed.
A few of you found definitions of polymorphism outside of the field of computer science. I had phrased the question to suggest that I wanted definitions from within the field (``polymorphism is a term that computer scientists use ...'').
How are the definitions similar?
The ``similarities'' sections were the worst parts of most answers. Many of you claimed that quotations were talking about functions (or objects, or whatever) when, in fact, they said nothing about functions (or objects, or whatever).
How are the definitions different?
Write a clear definition of polymorphism, based on the definitions you have found (as well as those in the book and course web).
Disclaimer: Often, these pages were created ``on the fly'' with little, if any, proofreading. Any or all of the information on the pages may be incorrect. Please contact me if you notice errors.
This page may be found at http://www.math.grin.edu/~rebelsky/Courses/CS302/99S/Handouts/examsoln.03.html
Source text last modified Mon May 3 13:19:12 1999.
This page generated on Mon May 3 14:30:23 1999 by SiteWeaver.
Contact our webmaster at rebelsky@math.grin.edu