A popular subject of research today is the nature of consciousness and whether computers can be made to replicate human thought. The 'holy grail' of this research is the creation of Artificial Intelligence (A.I.) -- computers that can think independently, in ways that go beyond the initial programming or foundations provided by their human creators. If humans manage to do this, it will teach us far more about the nature of consciousness and thinking than we currently know. Will this succeed? Should we want it to?
We've all seen at least some of the many stories about the possible negative consequences of creating Artificial Intelligence, but those stories are usually allegories about present-day situations rather than genuine warnings about likely risks. It's implausible that the creation of A.I. would be 100% risk-free -- every scientific and technological advance carries risks of some sort, even if minor. Every time we learn how to do something, we open the door to someone misusing that knowledge. Still, we shouldn't get too carried away with dystopian visions of a future laboring under robot overlords.
On the other hand, a host of ethical issues could be unleashed by the development of Artificial Intelligence. Would intelligence make a machine sentient? Would an Artificial Intelligence have rights or duties? Should it be allowed to vote? What if it wanted to reproduce, creating new Artificial Intelligences that lack direct human input or controls? Eventually A.I. could become cheap enough to install anywhere, but would you really want your toaster or television to be invested with a sentient intelligence? What if it refused to let you watch programs that are insulting to intelligence?
Religious reactions to the development of Artificial Intelligence would be interesting. On the one hand, some would surely use it as "evidence" that an intelligence requires an intelligent designer; on the other hand, it would also be evidence that an intelligence doesn't require anything supernatural, transcendent, or divine as an explanation. Believers in souls would be faced with the question of whether an Artificial Intelligence has a soul. If not, it would be harder to defend the belief that humans must have one; if so, it would be harder to defend the idea that souls are supernatural and immaterial.
I wonder what an Artificial Intelligence would think of such debates.