
Big Data and Psychometrics
The Future with Artificial Intelligence
Artificial Intelligence (AI) is poised to disrupt our world. With intelligent machines capable of high-level cognitive processes such as thinking, perceiving, learning, problem-solving, and decision-making, and with advances in data collection and aggregation, analytics, and computer processing power, AI presents opportunities to complement and supplement human intelligence and to enrich the way people live and work.
On the other hand, some leading scientists and thinkers have warned of a 'technological singularity': the belief that ordinary humans will someday be overtaken by artificially intelligent machines, by cognitively enhanced biological intelligence, or by both. A widely reported example involved two AI chatbots created by Facebook to talk to each other; they were shut down after they began communicating in a language of their own making.
This also brushes against another big topic related to AI: consciousness. If an AI became sufficiently smart, it could laugh with us, be sarcastic with us, and claim to feel the same emotions we do, but would it actually feel those things? Would it merely seem to be self-aware, or actually be self-aware? In other words, would a smart AI really be conscious, or would it only appear to be?
This question has been explored in depth, giving rise to many debates and to thought experiments like John Searle's Chinese Room, which he uses to suggest that no computer could ever be conscious. The question matters for several reasons. It affects how we should feel about Ray Kurzweil's scenario in which humans become entirely artificial. It also has ethical implications: if we generated a trillion human brain emulations that seemed and acted like humans but were artificial, would shutting them all off be morally equivalent to shutting off your laptop, or would it be a genocide of unthinkable proportions (a concept ethicists call 'mind crime')? For our purposes, though, when assessing the risk to humans, the question of AI consciousness is not what really matters, because most thinkers believe that even a conscious artificial superintelligence (ASI) would not be capable of turning evil in a human way.