What AI and Animal Rights Have in Common

The current state of research in cognitive science assumes that the mind is what the brain does. This may not seem very profound at first, but its implications are interesting: whatever occurs in the brain is a physical process; there is no experience outside the brain; whatever we ascribe to “spiritualism” or “the soul” is the product of neural activity.

This includes consciousness. We have no reason to believe there is anything special about the human brain that grants our capacity for self-awareness. The leading theory is that consciousness is an emergent property of a brain processing massive amounts of information: so much that the brain begins to think about how it is processing that information.

This has implications for both animal rights and the field of artificial intelligence. Consciousness is probably not an on-and-off switch: the philosopher Thomas Nagel, in a famous thought experiment about consciousness, asked, "What is it like to be a bat?" Most likely, it is like something to be a bat, and that is consciousness. But we would hesitate to say that a bat's conscious experience is anything like our own.

Much more likely than the on-and-off switch is a continuum of consciousness, with humans on the far right side and “lesser species” like bats on the left. This would be the only plausible explanation from an evolutionary standpoint: imagining two consciousness-free parents giving birth to a child who grew up to have the world’s first existential crisis provokes as much laughter as the creationist canard of a half-man-half-monkey. At some point in a species’ development, consciousness emerges, and it grows as that species’ cognitive ability grows.

Many species have demonstrated consciousness, some more rudimentary than others. Crows, pigs, and several other species are capable of solving puzzles. Animals such as dolphins, fish, and chimpanzees have demonstrated self-awareness, for example by recognizing themselves in a mirror. While the capacity to suffer is a far more useful rubric for deciding whether to afford moral interest to a species, many people focus instead on conscious experience and do not seem to appreciate how many species are capable of it in various forms.

Leaders in the field of artificial intelligence presume that consciousness will emerge in their systems as well. There is no obvious reason it cannot: just as consciousness is not exclusive to humans, it may not be exclusive to brains, and complex computer programs running on sufficiently powerful hardware should give rise to self-awareness. This is why some in the field are panicking: self-aware machines can themselves design machines, conscious or otherwise, that pursue their own goals, and we had better make sure those goals align with our own.

There may come a day, then, when we have conscious, artificially intelligent machines in our midst while many of us continue to deny that nonhuman animals – complex brains and all – are capable of conscious experience. That would certainly be a brave new world.
