An Algorithmic and Social Introduction to Computer Science (CSC-105 2000S)

How would you classify the recent Web denial-of-service attacks? What techniques might have been used to avoid them?


Keeping in mind Buck BloomBecker's list of motivations, the criminal probably committed the crime simply for the sake of doing so. Even so, there is no telling what the criminal was thinking. A denial-of-service attack, by contrast, is specifically intended to disrupt business. But what benefit or reasoning would one have for disrupting a school? Fun? Inhibiting students', faculty's, or administrators' access? The possibilities are endless.

As Forester & Morrison claim, there are several reasons why many top-level managers do not implement tighter security measures. Besides not understanding the security measures, most of them are not willing to sacrifice the ease and convenience of their current system. Similarly, while some programmers overlook basic security measures, other programmers' intention is to keep the system convenient and simple. They do not wish to create strict security measures, or are never told to.

I suppose one way of combating such an attack would be to place a limit on the amount of data that can be accessed by a single user or group of users at a time. This would make it more difficult for hackers to access the same amount of data or disrupt the system. However, it would also make things difficult for legitimate users who wanted to access large amounts of data; they might be screened or denied service. So, in imposing tougher security measures, you eliminate the ease and convenience of accessing data on computers. The question is: is this a good trade-off?
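The per-user limit described above could be sketched as a simple sliding-window check. This is a hypothetical illustration only; the user-identification scheme, window length, and request cap are all invented assumptions, not drawn from any real server.

```python
import time
from collections import defaultdict, deque

# Illustrative, made-up limits: at most MAX_REQUESTS requests per user
# within any rolling WINDOW_SECONDS period.
WINDOW_SECONDS = 60
MAX_REQUESTS = 100

_history = defaultdict(deque)  # user id -> timestamps of recent requests

def allow_request(user_id, now=None):
    """Return True if the user is under the limit, False to deny service."""
    now = time.time() if now is None else now
    q = _history[user_id]
    # Drop timestamps that have fallen outside the rolling window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False
    q.append(now)
    return True
```

Note the trade-off the author mentions: a legitimate user who genuinely needs more than MAX_REQUESTS requests per window is denied just like an attacker.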


From what I know about them, the recent denial-of-service attacks have not yet been shown to have been in service of any kind of theft or destruction of property. Rather than bringing the perpetrators any economic gain (unless, I suppose, they held stock in companies that competed with those they hit), the attacks represent unauthorized access or use (and temporary shutdown) of systems, and seem to involve some level of technical skill, making them atypical (at least according to Forester and Morrison). The perpetrators could have been motivated by BloomBecker's "playpen," "soapbox," "fairyland," or "battle zone" mentalities.

I'm not sure how the attacks could have been avoided. There would have to be some kind of program that prevented rapid, multiple requests from the same place...something that limited information requests to only so many pieces of data from one location in some amount of time. However, I'm sure that there would be a way to get around that too.


How would I classify them in terms of what? Categories of social problems created by computers? I guess denial-of-service attacks would fall into a number of categories if we really think about it: security, censorship/control, accessibility, reliability... What boggles me more than anything are the motives behind such actions, though. Unless it's one company trying to sabotage another, I don't see what a person would really get out of committing these attacks, other than maybe an ego trip. According to Forester and friend, that seems to be one of the main motivators of computer crimes: simply the thrill of being able to do them, combined with the impersonal atmosphere that removes any sense of guilt or wrongdoing. I find that really disturbing.

What might have been used to avoid them? I really have no idea. Of course these websites want to be open to as many people as possible, so I don't know how you'd even go about trying to distinguish between real customers and troublemakers. Sorry that's not very helpful, but I don't think I know enough about the possibilities of computer security to really address that concern.


The recent Web denial-of-service attacks would be classified as a computer crime against the network or server they are bombarding or overloading with information. In a society where time is money, when servers are down, that is business lost. It's not really hacking into a system, and it doesn't seem to lead to any direct benefit for the person causing the system problem. Rather, it seems that the denial-of-service attacks are malicious blows struck against a system to cause inconvenience and problems for others.

As far as techniques which could have been used to avoid this problem, I suppose the most effective would be some sort of detection software that monitors incoming transmissions, looking for whatever the digital signs are in cases where denial-of-service attacks are made. Beyond that, the most important other aspect would not be increased security but faster computer processing and increased memory size, which would help avoid the overloading of the system. In that case, though, it seems like a computer criminal would just develop a more robust transmission output of proportions adequate to match the targeted system. We'll have to build smarter computers (or software programs) that can detect earlier when they are about to be screwed...and then do something to disable the attack.
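One very rough way to sketch such detection software is to keep a running baseline of each traffic source's request rate and flag sudden spikes against it. Everything here is an invented illustration — the smoothing factor, spike multiplier, and minimum baseline are not from any real product.

```python
# Made-up tuning parameters for this sketch.
ALPHA = 0.2        # smoothing factor for the moving average
SPIKE_FACTOR = 5   # flag when an interval's count is 5x the baseline
MIN_BASELINE = 10  # ignore sources with negligible history

class SpikeDetector:
    def __init__(self):
        self.baseline = {}  # source -> smoothed requests-per-interval

    def observe(self, source, count):
        """Record one interval's request count; return True if it looks like a flood."""
        # A source's first sample simply establishes its baseline.
        avg = self.baseline.get(source, count)
        suspicious = avg >= MIN_BASELINE and count > SPIKE_FACTOR * avg
        # Update the exponential moving average with this interval.
        self.baseline[source] = (1 - ALPHA) * avg + ALPHA * count
        return suspicious
```

As the author predicts, an attacker can adapt — for example by ramping traffic up slowly so the baseline rises with it — so this kind of threshold check is a starting point, not a cure.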


It seems like the recent denial-of-service attacks on the web were fairly straightforward acts of sabotage, not designed to extort or blackmail, but merely to screw the companies for the sake of screwing them. I would imagine that these acts were committed by people who are in some way disgruntled with the various companies targeted. Maybe one way to increase security would be, like Hotmail, to require people to sign in in order to access the site. This might be a way to at least limit access and make it a bit more difficult for hackers, though instituting something like this on a site like Amazon or Yahoo would also likely decrease site traffic significantly. Of course, for a hacker serious enough about doing something like this, measures like passwords would likely be more of a hindrance than an actual deterrent.


The denial-of-service attack is a white-collar crime that has victims. The victims are the users who are unable to use the websites; the corporations that run the websites are also victims. The targets of the attacks are very visible corporations (Yahoo!, CNN, Amazon, eBay, etc.). The motivation for the attacks is not money unless it is blackmail; the opportunity to disrupt service is the only accomplishment of the denial of service. It might be a game or intellectual exercise for the hackers. The attacks do not come from employees of the corporations.

The obvious solution is to increase the speed of the servers. One technique could be to limit the number of requests a single user can make in a certain amount of time. Another idea would be to distinguish between legitimate and illegitimate requests and respond only to the legitimate ones. The public-service nature of the websites poses a dilemma for asking users to enter a password: if there were a password system, would the hackers be able to steal passwords and continue to deny service?


The latest denial-of-service attacks use a technique called "smurfing," which uses "bounce sites" to amplify the traffic sent to the victims by up to 200 times. The only technique I can think of is not even a foolproof one: do not allow people to pick their own passwords, and instead assign random, difficult ones, since most DoS attacks are committed from computers that were hacked into.
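The assigned-random-password idea above might look something like this hedged sketch; the password length and character set are arbitrary choices for illustration, not a security standard.

```python
import secrets
import string

# An illustrative mixed alphabet: letters, digits, and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=12):
    """Generate a random, hard-to-guess password the user did not choose."""
    # secrets.choice uses a cryptographically strong random source,
    # unlike random.choice.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Assigned random passwords resist the dictionary attacks that let intruders take over the machines later used as DoS launch points, though users are more likely to write such passwords down.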


I tend to classify these denial-of-service attacks as issues that deal partly with reliability and partly with security. This certainly sends the reliability of these various Web sites down the tubes, but also shows how the Web sites have neglected the possibility of such security breaches. It seems to me that it would be fairly simple for a Web site to write and employ an algorithm that keeps the number of pages downloaded by one particular browser to a certain number per minute, hour, or other desirable time-frame. It could even continue to permit the browser to download pages; however, those with high download counts could be throttled to a lower transfer rate so as not to disrupt the other browsers on the site. That would also permit legitimate users who download excessive amounts to continue downloading without being too upset with the company for telling them they've downloaded too many pages.
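The throttling scheme described above might be sketched as follows. It is hypothetical: the window length, page limit, transfer rates, and the idea of identifying a "browser" by a single id are all invented assumptions for illustration.

```python
import time
from collections import defaultdict

# Illustrative, made-up numbers for this sketch.
WINDOW = 60.0          # seconds per time-frame
PAGE_LIMIT = 50        # pages served at full speed per window
FULL_RATE = 1_000_000  # bytes/second at full speed
SLOW_RATE = 50_000     # bytes/second once throttled

class Throttler:
    def __init__(self):
        self.counts = defaultdict(int)
        self.window_start = defaultdict(float)

    def transfer_rate(self, browser_id, now=None):
        """Record one page request and return the rate to serve it at."""
        now = time.time() if now is None else now
        if now - self.window_start[browser_id] >= WINDOW:
            # New time-frame: reset this browser's page count.
            self.window_start[browser_id] = now
            self.counts[browser_id] = 0
        self.counts[browser_id] += 1
        # Heavy downloaders are slowed, not cut off.
        return FULL_RATE if self.counts[browser_id] <= PAGE_LIMIT else SLOW_RATE
```

Unlike an outright request cap, this degrades service gracefully, which matches the author's point that legitimate heavy users should stay able to download.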


I think it is hard to classify this problem because it doesn't really fall well into any one category, and that is why it might be a hard problem to solve. It doesn't have to do with ownership. It only slightly deals with access, in that people were denied access, but not by the people providing the service. What is to be done when a completely independent individual can cause denial of access from somewhere else? Reliability comes into question when one asks who is responsible for noticing or preventing such acts. Is there any way for websites to track who is hitting their sites and how often? Or would this intrude on honest shoppers (by monitoring their activity) more than it stopped the criminals? Security is definitely an issue, but there again in a confusing way. In this example, nothing was broken into, cracked, or stolen. Does that constitute a breach of security? How can you distinguish an attacker from someone who just wants to visit your site a lot?


Although nothing was literally "stolen" during these attacks, there is no doubt that they are computer crimes. What did occur was a theft of potential clients and sales, as well as trades and information. There were definitely instances of breaking into unauthorized systems and using them as tools in these crimes. I guess I would classify these as a sort of indirect theft, the loss coming as an intentional consequence of the hackers' doing. As far as avoiding these crimes, Forester and Morrison pointed out that most security lapses are due to human error. Their suggestions seem pretty appropriate, such as limiting password and vital-information access to a minimum. Maybe there is a way to limit the number of pages a site can serve, denying service to any requests past a certain number. But this doesn't seem too likely for an online company like eBay or E*Trade, which relies on being able to serve as many customers as it can. The larger and more complex a public site is, the more problems with computer crime it seems likely to have.

Disclaimer Often, these pages were created "on the fly" with little, if any, proofreading. Any or all of the information on the pages may be incorrect. Please contact me if you notice errors.

Source text last modified Mon Feb 14 15:34:24 2000.

This page generated on Wed Feb 16 08:36:09 2000 by Siteweaver.
