Fringe Science, Theory and Fun!

No matter the form it might take, most of us have wondered what an artificial intelligence or simulated consciousness would be like. Would it experience emotions the way we do? Would it decide to treat us as mere cockroaches to be squashed under its titanium feet? But what about the philosophical implications of creating a consciousness? What about our responsibility to this new creature? There is a lot to consider before producing an intelligent, sentient being, and here are a few things to think about.


The Parental Role of the Creator

We take our intelligence, and even more so our knowledge, for granted. Even in reading this blog post, you are drawing on decades of learning, practice and experience. Think about riding a bike, carrying on a casual conversation, or consciously and unconsciously making hundreds or thousands of decisions every day. These are things we’ve learned through experience and through the guidance of people other than ourselves.

Most people would assume knowledge and experience could simply be dumped or downloaded into an artificially created being, but would that lead to a true consciousness? By downloading pure information, we would simply be creating a catalog of responses for the AI to draw on in reaction to external stimuli. Would this allow the AI to act freely, or would it just carry out programmed responses?

So what would be the alternative? Well, we might need to give the AI the same treatment we had: to be raised from youth to adulthood. It would need to learn its responses rather than have us provide them. Of course, assuming we could give this consciousness superintelligence and unbridled learning ability, it might be fully developed by the time it was three years old, but it would still need to learn one step and one experience at a time.

Deciding what and how it should learn

So that leads to the next question: what and how should it learn? The world is full of bias, and heaven knows there are plenty of unfit parents teaching their children a whole slew of bad habits and ideologies. So how do we decide what this sentient being would learn? This is where the real danger lurks. Giving a superintelligent creature warped views on matters such as war and ethics could lead down dark, dark paths. On the other hand, if Buddhist monks are really onto something with the whole ultimate-truth thing, the AI could become “enlightened” and choose a non-violent, benevolent path within a short time of its creation. Then again, it could become so apathetic that we’d be dealing with the world’s worst know-it-all teenager.


How will others treat it?

At least in the beginning, we can almost guarantee backlash from some communities in our society. This could lead to hate crimes, violence, and possibly even the murder of these artificially created beings. Unless the world undergoes a huge paradigm shift, this unfortunately seems inevitable. So, as the creators, we have a responsibility to protect them and to prepare them for how the world may treat them.

Should it emote?

With the previous point in mind, would it be cruel to give them emotion, or would it be cruel not to? This, of course, assumes emotions are not a natural consequence of intelligence, but we’ll leave that for a future discussion. If emotions are a choice, should we give these beings emotions? All our lives we’ve been persuaded, guided, hurt and charmed by emotions, to the point that we really don’t understand what a world without them would be like. In fact, it’s hard to tell whether emotions are a good thing or a bad thing. Sometimes they motivate us to improve ourselves and those around us, and sometimes they are a tool for pain and destruction. As creators, we need to weigh the benefits and risks of providing artificial emotions, especially in a world that will be full of both love and hate for their kind.


Is it turning off a program or murder?

One of the most important things we need to consider is the creature’s lifespan. If we fully develop the technology to create an intelligence, we will almost certainly be faced with deciding how long it lives. This raises an important question: is shutting it down simply turning off a process that simulates life, or would it actually be extinguishing a life, and therefore murder? We ourselves, for now at least, come with an expiration date. Most people have no say in when or how they will expire, but it is inevitable for all of us. For these creatures, we are the ones making the decisions about their life and death.


There are serious implications and responsibilities to consider before bringing any kind of sentient being into existence. So far, we’ve been more interested in learning whether creating artificial consciousness is even possible. What’s needed is some time to consider the responsibility of creating artificial consciousness before we ever press that power button.



2 Comments
  • Zach

    December 5, 2013

    Many of the points made in the post above have been explored by stories written around the assumption that AI has been successfully invented. These are very powerful ideas because they reflect right back on ourselves. I think it is very productive to try to create as much objectivity around these ideas as we can.

    So, it is my opinion that consciousness is an emergent attribute of the mind of any motile organism. Further, this attribute, like most such things, lies on a continuous scale.

    To this end it is helpful to think of the mind as an engine that takes the organism’s sensory inputs and memory as its input. This engine processes those inputs to build a model of them. Next, the engine runs the model to make short-term (and longer-term) predictions about how the actual environment will change and affect the organism. Finally, the engine sends out commands to the organism that maximize its chances of successfully negotiating that environment.

    This model-building and execution capacity of the mind is not restricted to sending commands; it also has recursive attributes that cause the model itself to be modified. A good way to think about this ability to model the models is to say it gives rise to another emergent attribute we call consciousness. Since this is all on a continuous scale, some organisms are more conscious than others. (A toy code sketch of this loop appears after the comments.)

    It is important to realize that non-human organisms are not somehow lower than or inferior to humans; rather, the high degree of recursive modeling ability that gives rise to consciousness is not necessary for survival but is just a side effect of our ‘big brains’.

    What this means for AI is that we cannot expect to invent it full-blown. Instead, it will be one of those things that starts out not looking conscious at all but over time gains more and more of the attributes of what we call consciousness.

    For these reasons, I do not think the points made in the post above are very worrisome, because they will be resolved automatically as the artificial organisms that exhibit AI become more and more complete (or conscious). I think the decisions discussed above will not present themselves as deliberate choices but will instead be built-in byproducts (side effects) of how the artificial organism is constructed.

  • History and Approaches to Artificial Intelligence » TheorySerum

    April 2, 2014

    […] I wrote an article about the responsibility of creating consciousness and raised a few ethical questions we need to answer before we hit the ON switch before creating […]
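Zach’s comment describes a concrete loop: sense, remember, model, predict, act, and recursively revise the model itself. Below is a minimal Python sketch of one way to read that loop. Every name in it (Engine, Model, the thresholds and readings) is hypothetical and invented for illustration; this is a toy of the loop’s shape, not an implementation of consciousness.

```python
# Toy sketch of the sense -> model -> predict -> act loop from Zach's
# comment. All names are hypothetical; this illustrates the shape of
# his description, not an implementation of consciousness.

from dataclasses import dataclass


@dataclass
class Model:
    """A running estimate of the environment (a crude stand-in)."""
    estimate: float = 0.0
    learning_rate: float = 0.5  # how strongly new input revises the model

    def update(self, observation: float) -> None:
        # Fold the new observation into the current estimate.
        self.estimate += self.learning_rate * (observation - self.estimate)

    def predict(self) -> float:
        # Short-term prediction: assume the world will match the estimate.
        return self.estimate


class Engine:
    """The 'mind as engine': sensory inputs + memory -> model -> commands."""

    def __init__(self) -> None:
        self.memory: list[float] = []  # the organism's remembered inputs
        self.model = Model()

    def step(self, sensory_input: float) -> str:
        # 1. Run the current model to predict the incoming input.
        prediction = self.model.predict()

        # 2. Take the sensory input and keep it as memory.
        self.memory.append(sensory_input)

        # 3. Refine the model of the inputs.
        self.model.update(sensory_input)

        # 4. Recursive part: model the model. If the prediction was poor,
        #    revise how the model itself learns; a crude stand-in for the
        #    self-modeling Zach suggests gives rise to consciousness.
        error = abs(sensory_input - prediction)
        if error > 1.0:
            self.model.learning_rate = min(1.0, self.model.learning_rate + 0.1)

        # 5. Send out a command meant to help the organism negotiate
        #    the environment (here, a trivially simple policy).
        return "approach" if self.model.predict() > 0 else "avoid"


if __name__ == "__main__":
    engine = Engine()
    for reading in [0.2, 1.5, -0.3, 2.0]:  # made-up sensory readings
        print(engine.step(reading))
```

The only point of the toy is the shape of the loop Zach describes: predict, compare, act, and, recursively, let prediction error modify how the model itself learns.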
