An Algorithmic and Social Introduction to Computer Science (CSC-105 2000S)

Assuming that autonomous systems (like Star Wars) could actually be made to work, what, in your mind, are the most salient issues with regard to this kind of technology?


I think that one of the biggest fears that people have of autonomous systems is that we cannot expect them to be 100% reliable; they may easily make mistakes and are prone to failure. Some of the main concerns with the Star Wars project were false alarms and action being initiated because of those false alarms. Another concern is that once a system becomes autonomous, what is to say that it will maintain loyalty? I guess what it all boils down to is that with autonomous systems, we would need to have faith and basically trust that these machines will do what we want, nothing more and nothing less. With that in mind, is that really an autonomous system? No, that sounds more like a smart and obedient computer.


First of all, although the "Star Wars" program itself is dated, the concept has not been scrapped. F&M seem to have been right on the mark when they predicted that the scope of the project would have to be decreased if it was going to continue, because that is exactly what has happened. In fact, from what I've heard on the news, our proposed "missile defense system" has been a major issue recently in arms control talks between the US and Russia. Now, I don't know how much the missile defense system being proposed and heavily pushed by Republican legislators resembles the old "Star Wars" idea, but from what I understand, it is just a scaled-down version of the same thing.

I think that the most salient issues with regard to this technology have to do with the fact that you can never really be sure that it WILL work. I have a really difficult time believing that a system as complicated as the originally planned Star Wars program (or even one significantly less complicated) could be made to work; there are just too many factors to consider. Plus, wouldn't some country have to shoot a bunch of missiles at us in order for us to KNOW that it works? (Then again, I suppose much of the point isn't actually the ability to shoot down missiles, but to convince "rogue nations"--as they're calling them nowadays--that we COULD, so that they wouldn't even try.) Even if I assume that we've totally tested it and it's totally behaving as it should, the big issue is whether we really want to entrust power over all kinds of real weapons to something that has no human input. (Remember the movie "War Games"--about a kid accidentally fooling the computers at NORAD into behaving like a war with Russia was starting--wasn't it great? Okay, I guess it sucked, but it seemed totally cool when I was 10.)

Regardless of whether it works or not, I think a system of this scale is just a big waste of money: another 'techno-fix' meant to convince Americans that they're "totally safe." It sucks money away from domestic programs that could be improving life for all Americans, and from international programs that could have an actual impact on evening out the global balance of power (rather than just consolidating it), which might actually decrease the chances of an attack.


Okay, so I did the reading on Thursday night before I went out of town for the weekend, and now I'm wondering if I read the wrong chapter. In fact, I'm pretty sure I did. I'm guessing you changed the assignment in class on Friday and it hasn't been posted on the website? Anyways, here is an answer off the top of my head. The most problematic and questionable things about an autonomous system like Star Wars are its ethical implications: what we are willing to trust technology with (the potential to destroy nations) and how this technology then affects war. Given a system like Star Wars, and even the technology we do have now, war has become increasingly impersonal and, along with that, increasingly inhumane. Not that war has ever been "humane" per se, but the added distance and impersonality allow us to do even more horrible and intangible things to people--things we wouldn't (hopefully) imagine doing in person. Take even the bombing of Hiroshima, for example. And that wasn't even autonomous. So now let's consider what if it were, and the cost of this system screwing up even once. Not worth it, I tell you. Not worth it at all. Because then all of a sudden it is everyone's and no one's fault at the same time.


First, I don't like autonomous systems. Even if they can be tested, I think that computers should always have some human checks built in--I don't trust a computer enough to do any thinking exclusively for my own well-being, or the well-being of others. The article made a good point: "Can the realities involved in the deaths of millions be understood by anything other than a human being?" I respond negatively, even if the autonomous systems are not related to defense. There's no compassion chip yet. I don't want one either.

Second, I don't really think this technology is safe to rely upon. I don't think we should be striving for technological revolution on this scale. I find it very interesting that the article points out: "Perhaps we should dig much deeper into matters of a moral kind by pointing out that the bulk of research scientists in the developed world (CS scientists included) carry out research for the military." Though this article is very dated, I think this fact generally still holds true. It would be great if society could turn its attention to the more benevolent applications of computers, instead of constantly thinking about how billions can be spent "like much basic research, which may or may not end up being applied in real applications..." (Quotes from pg. 180) Especially when these applications seem to be primarily confined to defense and military systems.

Third, the most salient issues are: 1. the recognition of our societal priorities in a fashion that can truly benefit the quality of life for people, not the quality and volume of business or defense (which I acknowledge has a certain place in technological development--no more WWI situations is a plus for all); 2. the recognition of our limitations to date (don't ignore the last words; they modify "limitations" in such a way as to make clear that limitations may change, and some day technology may exist to solve problems today considered near impossible); and 3. don't let retired actors come up with any more ideas based on science fiction and force society to pay for them.

Finally, I just want to say that this article was very amusing, especially this fine sentence, which seems like a really funny joke to me (my dad was in the military--and he worked in intelligence and computing for the government): "And in the great tradition of counter-counter-counter intelligence, some warheads are designed to look like decoys while some decoys are designed to look like warheads pretending to look like decoys and so on."



Autonomous systems must be reliable and simple in order for their utility to be maximized. If systems that require very little human input are to have a large role in human society, even the technologically backward need to be able to use them. Autonomous systems need to be able to recognize human language, respond in human language, and adapt to different humans.

Also, autonomous systems must be reliable. The systems cannot make mistakes; if they do, the effects will be costly to correct and the systems themselves will be costly to fix. Furthermore, if autonomous systems are unreliable, then humans will be unwilling to give them responsible tasks. Allowing an autonomous system to rake leaves is a very different matter from allowing one to drive your children to school. Until an autonomous system is proven reliable, it will play a relatively small role in society.

There will certainly be horror stories about the failures of autonomous systems. These stories will aim to inflame public passions against these machines in order to remove them from society. The response to these horror stories by the producers of the machines will be to blame faulty human input. In order to defuse this issue, autonomous machines will need to process what humans actually mean rather than what they actually said. That task is too hard for most humans to accomplish.


My initial reaction is to pursue this area of AI and try to make autonomous systems. I think they would improve our way of life (in general) just as computers have, leaving them to do simple, repetitive tasks and humans to do more complex functions. This would be the ideal world, though; it would only be a matter of time before they got to be smarter and more complex themselves, becoming an entirely new species and eventually capable of taking over the world! But even if they never rose to the status of being equal or better than humans, there are many arguments against making autonomous systems. One is that more low-skilled laborers would be put out of jobs, and another is the idea brought up in the reading: would you want a computer to make crucial decisions like when to fire defense missiles (or lasers, etc.)? I would object to it simply on the grounds that someone (or a lot of people) could possibly be killed if the computer malfunctioned. That's all for now ...


I think autonomous systems have both their pros and cons. Sure, it's dangerous to have an autonomous, automatic anti-missile system with the potential to send weapons of extreme destruction raging upon the Russians because they decide to implode one of their missile silos, or something to that effect. But I think such a self-defense system could also be useful because, as the paper mentioned, if an incoming missile isn't shot down very quickly, it becomes many missiles.

As far as other kinds of autonomous systems go, I think similar issues are important. If we give such systems great power because they can do things we can't, then we're obviously going to have to accept the major risks involved in a computer system which, like all other major systems, WILL have a bug in it somewhere. Perhaps that's a little unfair to the programmers, but repeatedly I've been finding, not just in the reading, but through experience, that things can go wrong with any program in any way at any time. An automatic security system at a high school in Alabama could suddenly decide that a laser pointer keychain is a gun, and then the school board would be sued by the children who are covered in tear gas when the child with the pointer ignores the computer because he knows he doesn't have a gun. I think these systems would be great in a perfect world, but until this place gets perfect, or we find someone who writes flawless code 100% of the time, we should avoid them for a while and let us flawed humans make the mistakes; our responses are, at least in my view, less likely to be over-reactive.


I think the main thing with such systems is that we shouldn't get ahead of ourselves. Humans can get very ambitious, and sometimes we may try to invent the car before the wheel. Eventually, a system like the one we read about will probably be possible, and I think eventually it will be possible to create it safely and correctly. The emphasis here is on "eventually". Our technology has a long way to go before then, and we should focus on things that are attainable before we reach so far beyond our capabilities.


I think that the language of the article raises a number of issues. Particularly interesting is the balance between response time and human judgment. To put that much faith in a computer to efficiently and correctly make decisions that weigh on global well-being may be too much. Maybe the same can be said for automated airplane flying, or anything that puts human life at risk. I personally don't feel comfortable contemplating these scenarios. I feel like human beings should be able to control and harness everything that we create, and this exceeds that realm. There is always going to be the possibility of human error, but there are also exceptional people, capable of filling these positions and tasks, in whom I have more faith than in computers. Human error is at least accounted for. And if ultimately we are trying to create computers that think like humans, would it not be human error in the end that would be a mistake?

Disclaimer: Often, these pages were created "on the fly" with little, if any, proofreading. Any or all of the information on the pages may be incorrect. Please contact me if you notice errors.


Source text last modified Wed Feb 16 08:16:14 2000.

This page generated on Tue May 2 10:46:13 2000 by Siteweaver.
