Ray Hammond’s 2006 novel “The Cloud” is the best thriller I’ve read in a long time. It combines two hot science fiction themes: the possible existence of other civilizations in the universe, and the development of superior artificial intelligence. What if we discover another civilization in the universe? And what if, of the three possible attitudes it could take toward us, namely positive, negative, or neutral, it turns out that they really don’t like us? And what if the super-smart computers we have by then developed to defend ourselves don’t like us much either?
Actually, it’s not a case of the computers liking or disliking us, but the more real and frightening possibility that they will tend to act for their own self-preservation, regardless of the consequences for humans. As Hammond writes it, the computers have been set up to talk to us with simulated personalities, so although you know you’re being dissed by a machine, it feels like being disliked.
I’m told that there are think tanks where very clever people are working on ways to give artificial intelligence a moral sense, but is this the right way to go? We think we’re wonderful, but looked at objectively, it’s not obvious that the ability to distinguish good from evil would make computers want to protect us.
And by the way, the “cloud” Hammond is talking about is NOT the relatively recent development in data storage.