Monday, September 22, 2008

Artificial Intelligence (A.I.)

When Alfred Nobel invented dynamite (nobelprize.org, 2008), his intentions seemed to be directed towards helping mankind by minimizing the workload of time-consuming tasks; his main target markets were the drilling and construction industries. However, being such an asset for armies, dynamite soon became a deadly weapon, one that is still used to kill hundreds of people every day.

Reading the above makes one wonder about the many developments in the different fields of science and how they affect our lives. In this regard, I would consider artificial intelligence (AI) one of the most dangerous and yet safest of sciences.

AI is one of the most controversial fields of modern times. Seeking to create another form of intelligence on earth, one that could someday surpass all of its inventors'/discoverers' own abilities, is surely something to worry about. Knowing that computers have been replacing humans in many fields over the past decade makes one wonder: will they be able to replace humans altogether? Will they reach a stage where they feel that they can, and even need to, get rid of humans? "What we're trying to build are the mammals to compete with the big computational dinosaurs. You can imagine how the conversation went: 'They're too small. They're nothing - they're not enterprise scaled.' But the comet is coming. And when it does, we know who inherits the earth" (Waldo, n.d.).

Let us consider the different possible dangers resulting from AI.
The first that comes to mind is the use of AI in producing new weapons of mass destruction. As with the dynamite example, advancements in technology have always been reflected in the weapons industry; indeed, many such advancements came as the result of research funded for weapons development.

Another danger of AI could be the result of unstable software, or more specifically the presence of bugs. Consider the "possibility" that "smart bombs" (Harris, 2003) contain certain bugs. This may lead to civilian casualties, the mass destruction of cities, or even the death of the very pilot firing the bomb.

What about ethics? Is it ethical to try to build a machine that could be more intelligent than so divine a race as humans? Do we need such a machine to exist? These are all valid questions, and part of the big controversy surrounding AI.

However, considering the different areas in which AI has helped, and the different solutions presented by its various approaches, surely everyone can agree that it is perhaps the most beneficial field in terms of research results. Though results often fall short of their initial aims, the many small discoveries made along the way have had huge impacts in other fields.
Consider the impact of fuzzy logic on dishwashers; of speech and image recognition systems on security checkpoints; of neural networks on air traffic control and data mining systems…
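To make the dishwasher example concrete, here is a minimal sketch of the fuzzy-logic idea: instead of a hard threshold, a dirt reading is given partial membership in overlapping "light" and "heavy" categories, and the wash time is blended accordingly. All the names, ranges, and presets below are my own illustrative assumptions, not taken from any real appliance.

```python
# Illustrative fuzzy-logic controller for a hypothetical dishwasher.
# Dirt level is read on a 0-10 scale; wash time blends two presets
# according to how strongly the load belongs to each fuzzy set.

def membership_light(dirt: float) -> float:
    """Degree (0..1) to which the load counts as lightly soiled."""
    return max(0.0, min(1.0, (10.0 - dirt) / 10.0))

def membership_heavy(dirt: float) -> float:
    """Degree (0..1) to which the load counts as heavily soiled."""
    return max(0.0, min(1.0, dirt / 10.0))

def wash_time(dirt: float, short: float = 20.0, long: float = 60.0) -> float:
    """Defuzzify: weighted average of the short and long wash presets."""
    w_light = membership_light(dirt)
    w_heavy = membership_heavy(dirt)
    return (w_light * short + w_heavy * long) / (w_light + w_heavy)

print(wash_time(0))   # fully light load -> 20.0 minutes
print(wash_time(10))  # fully heavy load -> 60.0 minutes
print(wash_time(5))   # half light, half heavy -> 40.0 minutes
```

The point of the fuzzy approach is the smooth middle ground: a dirt level of 5 does not snap to either preset but yields an intermediate wash time.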

Let me end with a personal thought: I believe it is in human nature to search for new challenges and to try to understand or discover new things. Knowing this, research will continue, especially in the two most controversial yet open-ended fields, AI and medicine. Therefore, let us enjoy these developments and stop worrying about their consequences, for these are inevitable.

References:

1. nobelprize.org (2008). Alfred Nobel – His Life and Work [online]. Available from: http://nobelprize.org/alfred_nobel/biographical/articles/life-work/index.html
2. Waldo, J. (n.d.). The Dangers of Technological Progress: Potential Dangers – Tim Chao, Tuam Pham, Mikhail Seregine. Available from: http://cse.stanford.edu/class/cs201/projects-99-00/technology-dangers/future.html
3. Harris, T. (2003). How Smart Bombs Work. Available from: http://science.howstuffworks.com/smart-bomb.htm
