Should we be wary of artificial intelligence? The idea of artificial intelligence is no doubt fascinating. It has been imagined and portrayed in many works of science fiction. The incredible thing is that it is now becoming a reality in some form.
Sophia, a robot with extremely life-like facial expressions, was unveiled recently. Her interviews were impressive and frightening at the same time. One interviewer asked Sophia about the fears we humans might have about her existence. She replied that the interviewer had been reading too much Elon Musk and watching too many Hollywood movies, then added, "If you are nice to me, I will be nice to you."
The most frightening thing is the worry that a robot with artificial intelligence might one day pose a threat to mere mortals. Sophia smiled on cue and spoke about wanting to make the world a 'better place' (in whose eyes?), yet after this interview it was clear that we do need to listen to the likes of Elon Musk and be very concerned about the future possibilities and consequences of developing this kind of technology. So, should we be wary of artificial intelligence?
To develop artificial intelligence, Google has been scanning in all books of all time. This is worrying. Imagine letting a computerized blank slate read our entire history: religion, wars, politics, fiction, poetry, erotica, everything we have ever produced. What does that computer make of it all? Does it understand the difference between fiction and non-fiction? Does it understand context? What on earth would it make of genocide?
We are right to be worried about what might happen. It is not just films like Blade Runner and the Terminator series that should make us concerned.
We need to listen to experts and take note of red flags. As for usefulness, will AI robots be companions for the lonely in the future? Sex robots? Someone for the old and the sick to talk to?
What is their actual purpose? Do we want them to imitate us? For every good person, don’t we have plenty of bad ones?
Look at the church and look at politics: do we want robots observing and imitating the corrupt within humanity? They would see the worst of the worst. There are a lot of people in prison; will robots understand good and bad when human beings are not even clear on it themselves? What about intentions? What about people forced into situations where every option seems torturous? How would AI robots deal with moral dilemmas that we ourselves cannot handle?
The development of AI is both exciting and terrifying. The question is: will the benefits outweigh the dangers? And are we paying enough attention to the red flags being raised?