Held Monday, February 14, 2000
Today we return to networking issues at the lowest appropriate
level by considering general issues for direct-link networks.
- A number of you have reported difficulty with the current assignment.
To give you a chance to talk to me about those difficulties,
I am extending the due date until Friday.
- Please do come talk to me.
- In the C that I learned, it was dangerous to pass a struct to
a function or return a struct from a function.
- I'd strongly recommend that you write a wrapper class for
your node class.
- Assignment 1 has
been (mostly) graded.
- Some notes
- We'll spend a few minutes on common errors.
- For Wednesday's class, find out something useful about an
error-correcting code. That is, identify an error-correcting
code and learn how it works.
- Introduction to direct-link networks
- Basics of information theory
- Representing bits
- The problem of framing
- In direct-link networks, the nodes are directly connected.
- You may have two nodes, one on each end of a ``wire''.
- You may have multiple nodes on some form of ``bus''
(physical or wireless)
- Like our textbook authors, I will emphasize the two-node case.
- The physical layer provides us with a semi-reliable stream of
bits. But we clearly want more. We might also want
- Error detection and correction.
- Bytes or frames instead of bits.
- Flow control.
- Other services for the network layer.
- There are different services we can provide.
- Simple frames: an unacknowledged connectionless service
- Frames with error detection: acknowledged connectionless service
- Reliable frame streams: acknowledged connection-oriented services
- What are the issues we are concerned with for directly-connected nodes?
- How do we connect the nodes?
- How do we represent bits?
- How do we multiplex the line?
- How do we make chunks of bits (frames)?
- How do we detect errors in those chunks?
- How do we correct errors in those chunks?
- Let the hardware people worry about it. We're software people.
- More realistically, we can use cables of some form (twisted pair,
thin-net, thick-net, or fiber) or radio waves.
- We can also form virtual connections from existing infrastructure,
such as the phone company's networks or the cable company's.
- Yes, you get to pay for that privilege.
- What do we care about when selecting a medium?
- Bandwidth (information/second)
- Speed (for getting initial pieces of information)
- Support for topologies
- There are many different techniques you can use to encode bits in a
carrier wave.
- Change the amplitude of the wave
- Change the frequency of the wave
- Change the phase of the wave (not mentioned earlier)
- We can use one or more of these to increase our data rate.
- In constellation patterns, the possible values are indicated by
points in a two-dimensional grid, with distance from origin
signifying amplitude and rotation from 0 degrees indicating
the phase shift.
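- As an illustrative sketch (the four-point constellation and its
particular phase angles below are my own choice, not from the text),
a QPSK-style scheme maps each two-bit symbol to a point with
amplitude 1 and one of four phase shifts:

```python
import cmath
import math

# A hypothetical 4-point constellation: every point has amplitude 1
# (distance from the origin) and carries 2 bits in its phase shift.
CONSTELLATION = {
    (0, 0): cmath.rect(1, math.radians(45)),
    (0, 1): cmath.rect(1, math.radians(135)),
    (1, 1): cmath.rect(1, math.radians(225)),
    (1, 0): cmath.rect(1, math.radians(315)),
}

def modulate(bits):
    """Group bits in pairs and map each pair to its constellation point."""
    return [CONSTELLATION[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

symbols = modulate([0, 0, 1, 1])
print([abs(s) for s in symbols])                       # amplitudes (all 1, up to rounding)
print([math.degrees(cmath.phase(s)) for s in symbols]) # phase shifts
```

Using both amplitude and phase (rather than phase alone) adds more
points to the grid, so each signal change can carry more bits.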
- Okay, we've got to get some (digital) information over an
electrical/optical/radio/whatever connection. How do we do it?
- We need to encode each bit
- Simple scheme: 1 = high power, 0 = low power
- Yes, there are better schemes
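- A minimal sketch of that simple on/off scheme (the level values
here are illustrative; real hardware uses voltages or powers):

```python
# Simple scheme from above: 1 = high signal level, 0 = low.
HIGH, LOW = 1.0, 0.0

def encode(bits):
    """Map each bit to a signal level (an NRZ-style on/off encoding)."""
    return [HIGH if b else LOW for b in bits]

def decode(levels, threshold=0.5):
    """Recover bits by thresholding the (possibly noisy) received levels."""
    return [1 if level > threshold else 0 for level in levels]

signal = encode([1, 0, 1, 1, 0])
print(signal)          # [1.0, 0.0, 1.0, 1.0, 0.0]
print(decode(signal))  # [1, 0, 1, 1, 0]
```

The threshold in decode hints at why this still works when the
received levels are smoothed and attenuated, up to a point.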
- Problem: the sent signal (square wave) often looks little like the
received signal (smoothed out).
- It is affected by noise, delay, and attenuation.
- Not a new realization: Morse code invented 1832, telegraph between
Washington and Baltimore 1843 had this problem.
- Solution: Fourier (early 1800's) noted that any periodic function
(variation of quantity with time) can be represented as the
sum of sine waves of different amplitudes, phases, and frequencies.
These component waves are called harmonics.
- Why do we care? Because by understanding the effects of a circuit on the
component sine waves, we can understand the effect of the circuit on
more complex signals.
- Particular application: higher-frequency signals are usually highly
distorted. We can filter out higher frequencies or ignore them or
virtually ignore them or ....
- On phone lines, the signal is usually filtered at 3200 Hz.
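- A small sketch of the idea, assuming a unit square wave: its Fourier
series uses only odd harmonics, and truncating the series models a
channel that filters out high frequencies, which is why the received
square wave looks smoothed.

```python
import math

def square_approx(t, f=1.0, harmonics=3):
    """Sum the first few odd harmonics of a unit square wave:
    (4/pi) * sum over odd k of sin(2*pi*k*f*t) / k."""
    return sum(4 / math.pi * math.sin(2 * math.pi * k * f * t) / k
               for k in range(1, 2 * harmonics, 2))

# At t = 0.25 (the middle of the "high" half-period), the value
# approaches 1 as we keep more harmonics.
print(square_approx(0.25, harmonics=1))
print(square_approx(0.25, harmonics=50))
```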
- The difference between the highest and lowest frequencies on a
channel is called the bandwidth of the channel.
- The data rate is affected by
- the bandwidth of the channel,
- the number of signal changes per second (the baud rate), and
- the number of bits encoded in each change to the signal.
- The higher the baud, the fewer harmonics we can send.
- So, how much can we really send on a line? Harry Nyquist gave a
sampling theorem, which says that any signal of bandwidth H can
be completely reconstructed from 2H samples per second.
- Hence, if the signal has V distinct levels, the maximum data rate is
2H*log2(V) bits per second.
- However, this ignores noise. Shannon showed that the maximum rate
on a noisy channel is H*log2(1 + S/N), where S/N is the
signal-to-noise ratio.
- Shannon did more than talk about Nyquist's result. He developed a
technique for analyzing information (that
serves as the basis of what we call information theory).
- Shannon claimed that we can analyze information solely in terms of its
statistical properties. He also suggested that sources are typically
ergodic: the statistical properties of a single message tend
to be the same as average properties over all messages.
- For example, the frequencies of individual letters in English text
are about the same for each document as for English in general.
- The same holds true for pairs of letters and even triplets of letters.
- Shannon discussed entropy: uncertainty in a domain, which
is resolved by messages.
- It is a function of the number of possible messages and the
probabilities of each possible message in the domain of all messages.
Shannon's definition was that entropy is the negative of
the sum over all messages
of (the probability of the message times log2(prob message)).
- As an example, consider a fair coin and an unfair coin.
- For the fair coin, the probability of each message is 1/2.
Hence the total entropy is
- -1 * (1/2*log2(1/2) + 1/2*log2(1/2))
- = -1 * (log2(1/2))
- = -1 * -1
- = 1
- For a somewhat unfair coin, let us say that the probability of heads is
3/4 and the probability of tails is 1/4. The total entropy is
- -1 * (3/4*log2(3/4) + 1/4*log2(1/4))
- = -1 * (3/4*log2(3) + 3/4*log2(1/4) + 1/4*log2(1/4))
- = -1 * (3/4*log2(3) + log2(1/4))
- ~= -1 * (3/4 * 1.6 - 2)
- ~= 0.8
- For a truly unfair coin, let us say that the probability of heads
is 1 and the probability of tails is 0. The total entropy is
- -1 * (1*log2(1) + 0*log2(0)), taking 0*log2(0) to be 0
- = -1 * (1*0 + 0)
- = 0
- The fair coin has the most entropy. The deterministic coin has
no entropy. The somewhat biased coin is somewhere in the middle,
just as we'd expect.
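- The three coin computations above can be checked with a short sketch
of Shannon's formula (the helper function is mine, not from the notes):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: -sum of p * log2(p) over all messages,
    treating 0*log2(0) as 0 by skipping zero-probability messages."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # fair coin: 1.0 bit
print(entropy([0.75, 0.25]))  # biased coin: ~0.811 bits
print(entropy([1.0, 0.0]))    # deterministic coin: 0.0 bits
```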
- Using entropy of noise and entropy of signals, Shannon came up with
the result that the maximum rate is
H*log2(1 + S/N)
- No, we won't derive that result.
Thursday, 20 January 2000
- Created as a blank outline.
Monday, 14 February 2000
- Filled in the details.
- Some sections were modified from
outline 3 of Dartmouth's CS78.96S.
Wednesday, 16 February 2000
- Moved uncovered material to the next outline.