No matter the form it might take, most of us have thought about what an artificial intelligence or simulated consciousness might be like. Would it experience emotions like we do? Would it decide to treat us as mere cockroaches to be squashed under its titanium feet? But what about the philosophical implications of creating a consciousness? What about our responsibility to this new creature? There is a lot we should consider before producing an intelligent, sentient being, and here are a few things to think about.
The parental role of the creator
We take our intelligence for granted, and our knowledge even more so. Just by reading this blog post, you are demonstrating decades of learning, practice and experience. Think about riding a bike, carrying on a casual conversation, or consciously or unconsciously making hundreds or thousands of decisions every day. These are things we’ve learned through experience and through the guidance of people other than ourselves.
Most people would assume knowledge and experience could simply be dumped or downloaded into an artificially created being, but would that lead to a true consciousness? By downloading pure information, we are simply creating a catalog of responses for the AI to use in reaction to external stimuli. Would this allow the AI to act freely, or would it just carry out a programmed response?
So what would be the alternative? Well, we might need to give the AI the same treatment we had: to be raised from youth to adulthood. It would need to learn its own responses rather than have us provide them. Of course, assuming we can give this consciousness super intelligence and unbridled learning ability, it could be fully developed by the time it was three years old, but it would still need to learn one step and one experience at a time.
Deciding what and how it should learn
So that leads to the next question: what and how should it learn? The world is full of bias, and heaven knows there are a lot of unfit parents teaching their children a whole slew of bad habits and ideologies. So how do we decide what this sentient being would learn? This is where the real danger lurks. Giving a super intelligent creature harmful views on matters such as war and ethics could lead down dark, dark paths. On the other hand, if Buddhist monks are really onto something with the whole ultimate truth thing, the AI could become “enlightened” and choose a non-violent, benevolent path within a short time of its creation. Then again, it could become so apathetic that we’d be dealing with the world’s worst know-it-all teenager.
How will others treat it?
At least in the beginning, we can almost guarantee there will be backlash from some communities in our society. This could lead to hate crimes, violence and possibly even the murder of these artificially created beings. Unless the world undergoes a huge paradigm shift, this unfortunately seems inevitable. So as the creators, we have a responsibility to protect these beings and prepare them for how the world may treat them.
Should it emote?
With the previous statement in mind, would it be cruel to give them emotion, or would it be cruel not to? This is, of course, assuming emotions are not a natural consequence of intelligence, but we’ll leave that for a future discussion. If emotions are a choice, should we give these beings emotions? All our lives, we’ve been persuaded, guided, hurt and charmed by emotions, to the point that we really don’t understand what a world without them would be like. In fact, it’s hard to tell whether emotions are a good thing or a bad thing. Sometimes they feel good and motivate us to improve ourselves and those around us, and sometimes they are a tool of pain and destruction. As creators, we need to weigh the benefits and risks of providing artificial emotions, especially in a world that will be full of both love and hate for their kind.
Is it turning off a program or murder?
One of the most important things we need to consider is the life, or duration, of the creature. If we fully develop the technology to create an intelligence, then we will almost certainly be faced with deciding the duration of its lifespan. This raises an important question: is it simply turning off a process simulating life, or would it actually be extinguishing life, and therefore murder? We ourselves, as of yet, have an expiration date. Most people have no say as to when or how they will expire, but it is inevitable for all of us. For these creatures, we are the ones making the decisions about their life and death.
There are some serious implications and responsibilities to consider before bringing any kind of sentient being into existence. So far, we’ve been more interested in learning whether creating artificial consciousness is even possible. What’s needed is some time to consider the responsibility of creating artificial consciousness before we ever press that power button.