In a paper awaiting peer review (as of this publication), a research team at Amazon suggests that they have developed an AI model that is displaying “emergent” capabilities. According to the research team, the AI model was able to make leaps in language processing that are common for human language learners but have, until now, been unheard of in AI models.

With this kind of intelligence “emerging,” it is worth asking the question: Are we on the verge of a truly intelligent AI?


The Foundational Assumptions

We live in an age obsessed with science fiction. As a kid, I loved the Will Smith adaptation of I, Robot. In that story, a homicide detective uncovers the subversive plots of newly sentient robots. By the end of the movie, viewers are left to ponder the ethical implications of robots that can supposedly think and reason like their human creators.

I love these kinds of movies, but I have to admit that they are based on certain problematic assumptions: namely, that humans are essentially “meat computers” and that life can emerge from non-life.

Regarding the first point, it is a staple trope in both science fiction and modern AI debates that the only major difference between humankind and machine is informational complexity. For example, Ray Kurzweil, a computer scientist and leading proponent of the Transhumanist Movement, suggests that, in the near future, AI will reach and exceed human intelligence. As far as he is concerned, it is only a matter of time before there are sentient machines.

Kurzweil’s argument relies on the assumption that computers are foundationally similar to humans. (Or, more properly, that humans are foundationally similar to computers.) The assumption goes: if a human, which is just a highly complex system of electrochemical information, came about slowly through increasingly complex stages of evolution, then a computer, which is just a complex system of electrical information, can also achieve the same level of complexity as a human.

This assumption ties in neatly with the second foundational piece: that life can emerge from non-life. The Darwinian assumption is that all organic life started off as simple matter. Somehow, through the combination of various electrochemical reactions, this matter produced the first simple living organism. Over time, these organisms eventually evolved into humans.[1]

Why would we think that computers, which start off at a more sophisticated point than organic life did, could not achieve the same kind of evolutionary progress? Further, if humans are really just physical matter made “alive” by electrochemical reactions, why would we not think that scientists could create the same kind of life in computers? The logic here is sound, as far as consistency goes.

The Ghost in the Machine

To borrow from another science fiction trope, the problem with the above theory is the “ghost in the machine.” Christians have long believed that a human is not just a physical body; a human also has a soul. For most Christian thinkers, the soul is where identity is located; in other words, your soul is what makes you, you. A physical body without a soul isn’t a person; it is a corpse.

The soul is not something that humans gain.[2] Instead, it is fundamental to who humans are. The soul is not something that emerges or pops up at a certain point of human complexity; it is something that is generated, along with the body, at the moment of conception. Thus, in contrast to someone like Kurzweil, while humans and computers may have some similarities, they also have a fundamental difference: a human has a soul, while a computer does not, and cannot.

Consider for a moment an example of this difference: humans, through the soul, can think, while computers can only compute. We may use these terms interchangeably today, but that is mere equivocation.

You, for example, can ponder a dog. Not only can you recognize what a dog looks like, but you can also contemplate what it means for that dog to be a dog. (If you are in an overly philosophical mood!) A computer, however, cannot do the second part. A computer can potentially recognize a thing that has dog-like features, but it cannot contemplate what it means for that dog to be a dog. Even if it can produce words on a screen that look like contemplation, the computer itself is not actually thinking; it is only executing a programmed algorithm. It does not think about the dog, but merely reproduces information about the dog. The computer may do this in such a way that it mimics a human, but that is mere imitation, not personhood.

Conclusion

News headlines will continue to sensationalize advancements in computing technology. There is no doubt that computers will continue to get more sophisticated. They will likely continue to grow in their ability to mimic human characteristics. That said, we must also remember that, given a Christian framework, humans are unique in their rational abilities. We alone bear the Image of God.

Therefore, there is no need to worry about the sentient robot uprising.

This post was originally published at the Land Center.