Earlier I wrote an article about the responsibility of creating consciousness and raised a few ethical questions we need to answer before we hit the ON switch for Artificial Intelligence. But what approaches have we taken in the past, and what are we doing now, to create Artificial Intelligence or, if you'd rather, Consciousness? At the end of this post, I'll give my opinion on how artificial intelligence might be achieved, along with a few philosophical thoughts.
First, it needs to be said that I am in no way trained or educated in the field of Artificial Intelligence, nor do I have any special experience in it beyond general computer programming. These are shower thoughts, but in my opinion, good thoughts on how we might create Artificial Intelligence rather than just a pre-programmed responding mechanism (which, ironically, is a distinction I explored in a previous blog post about our own intelligence as humans).
Where we’ve come from
There’s a paved road AI has walked, and depending on your view, it could go way, way back. Skipping over the symbolic creation of autonomous gods, and old philosophers building metaphors and systems in an attempt to understand human thought, the concept of Artificial Intelligence, or at least the attempt to create it, has been around longer than you might think. If we hop over all of that, the technological pursuit of Artificial Intelligence began in the 1940s with the programmable computer, a machine based on the abstract essence of mathematical reasoning. Even then, scientists were predicting that computers would evolve to reach the reasoning and intelligence of humans. In 1956, a workshop at Dartmouth College marked the first serious attempt at making AI possible.
They were very enthusiastic and made the incredibly erroneous assumption that a one-to-one simulation of human intelligence could be achieved within their generation. Unfortunately, they didn’t realize the labyrinth of understanding they would need. They missed their mark by several yards, barely on the map, but that lawn dart was still closer to the bullseye than it had ever been before. Even though they didn’t achieve true Artificial Intelligence, they did spark a strong interest in it. Until 1980, there was a race between governments, most notably the United States and the United Kingdom, and a few venture capitalists to create an Artificial Intelligence that they rightfully believed would be worth billions. But by the ’80s, little progress had been made and most of the funding came to a halt.
Some of the early approaches to Artificial Intelligence
So how did they first tackle a problem as big as the philosophy behind “I think, therefore I am”? You see, this is where science gets fun. Real philosophical questions are raised as to what intelligence actually is, but before we tackle that problem, let’s take a look at some of the early approaches to Artificial Intelligence.
The Loop and Decision Trees
In a video game, the game loop is a timed check to see if anything in the game state has changed: whether the bullet has hit the bad guy, or the soccer ball has gone through the goal. The check usually runs on the order of microseconds, and in AI it’s the same. A loop of checks is created (something some neuroscience evidence suggests happens in our own brains) and decisions are made. Expert systems emulate decision making in a decision tree by asking a finite number of ifs. In other words, instead of programming a computer to flash three images, we tell the computer: if the number one on the keyboard is pressed, decide to show the first image; if the number two is pressed, show the second. This is essentially a decision tree, and you can imagine how complicated it can get. But is this intelligence? I would argue no, or at the very least, it’s intelligence on a very small scale and very far from our real target.
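The loop-and-decision-tree idea above can be sketched in a few lines of Python. This is only an illustration: the key codes, the “images,” and the list of inputs standing in for timed ticks are all invented.

```python
# A minimal sketch of the loop-and-decision-tree approach described above.
# The key codes and "images" are invented for illustration.

def decide(key_pressed):
    """A tiny decision tree: a finite number of ifs mapping input to action."""
    if key_pressed == "1":
        return "show image one"
    elif key_pressed == "2":
        return "show image two"
    else:
        return "do nothing"

def game_loop(inputs):
    """Each iteration stands in for one timed tick of the loop."""
    return [decide(key) for key in inputs]

print(game_loop(["1", "2", "x"]))
# ['show image one', 'show image two', 'do nothing']
```

However many branches you add, the machine is still only walking a tree we drew for it in advance, which is exactly why I’d hesitate to call this intelligence.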
Unpredictability and symbolism
The Loop requires a clearly defined input and a predetermined path of decisions, and that raises two problems for any attempt at full intelligence: it leaves no room for unpredictable environments, and no room for real choice (or at least the illusion of choice that we as humans so thoroughly enjoy). The symbolic approach still involves a decision tree, but it focuses more on the ability of sensors to capture and process unpredictable environments, gathering raw data and converting it into symbolic forms the system can reason about.
Where we are now
Approaches to Artificial Intelligence have come a long way, but out of the woods, we are not. There are a lot of revised theories as to how we can achieve Artificial Intelligence.
The Situated approach is based on Alan Turing’s suggestion that in order to get to real intelligence, we should stop focusing on tunnel-visioned goals such as programming a computer to play chess, and instead focus on creating a robot that takes in abstract data through its senses, learns from it, and essentially programs itself.
Bottom Up vs. Top Down
Without going into too much technical detail (which I, myself, can easily get lost in), there are two approaches an Artificial Intelligence can take when “making decisions.” The Bottom-Up approach starts with very elementary decisions that coalesce into a more complex decision tree of sorts. The Top-Down approach, as you would imagine, is the exact opposite, beginning with higher-level functions and decision processes. A crude example: seeing food while hungry (or not). From the Bottom-Up perspective, a person sees food and makes a simple decision first: am I hungry? If yes, the decision is passed up to more complex questions such as “What type of food is this?”, “Does this food seem like it would be yummy?”, and “Is health a primary concern with this yummy food?”
The Top-Down approach would start at exactly the opposite end, working downward until a simple decision is made by the AI.
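The food example can be sketched as two orderings of the same checks. Everything here is invented for illustration: the hunger threshold, the food types, and the answers are made up, and real bottom-up systems are far richer than a reordered if-chain.

```python
# A crude sketch of the food example above. Thresholds, food types,
# and answers are all invented for illustration.

def bottom_up_decision(hunger_level, food):
    """Start with the simplest check and escalate to more complex questions."""
    if hunger_level < 5:                          # "Am I hungry?"
        return "ignore the food"
    if food["type"] not in ("pizza", "salad"):    # "What type of food is this?"
        return "keep looking"
    if not food["looks_yummy"]:                   # "Does it seem yummy?"
        return "pass"
    if food["healthy"] or hunger_level > 8:       # "Is health a concern here?"
        return "eat it"
    return "eat it, reluctantly"

def top_down_decision(hunger_level, food):
    """Start from the high-level concern and work down to the simple check."""
    if not food["healthy"] and hunger_level <= 8: # health judged first
        return "pass on health grounds"
    if not food["looks_yummy"]:
        return "pass"
    if food["type"] not in ("pizza", "salad"):
        return "keep looking"
    if hunger_level < 5:                          # elementary check comes last
        return "ignore the food"
    return "eat it"

pizza = {"type": "pizza", "looks_yummy": True, "healthy": False}
print(bottom_up_decision(2, pizza))   # "ignore the food": stops at the first, simplest check
print(top_down_decision(2, pizza))    # "pass on health grounds": the high-level concern wins
```

Notice that the same not-very-hungry person reaches a decision for different reasons depending on which end of the tree the reasoning starts from.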
As you can imagine, I’ve skipped over a lot of data and probably simplified this topic into oblivion. However, if you’ve followed my blog, you know I’m a big softy when it comes to the philosophical side of things. I’m mostly interested in the philosophical questions that are raised with creating AI and how that could be achieved. Here are a few thoughts.
Raising a new kind of child
This point was brought up in the other AI post I wrote about the ethics and responsibilities of creating consciousness. I believe that for any true Artificial Intelligence to have any kind of sentience, and as-close-to-free-will as it can get, it needs to be raised the way a human baby and child would be raised. Often, we imagine that after building a robot, we would just download all the information it needs into its database, turn it on, and engage said robot in a conversation about our shared hatred of fruitcake. All we have to do is make sure it’s loaded up with terabytes of info on fruitcake and we’re set for a good conversation, right? Well, maybe, but it might sound like a conversation with your annoying know-it-all cousin who has nothing but fruitcake facts to share.
Okay, fine. Not just info. Let’s give this robot opinions. You are going to dump in an opinion that this robot hates fruitcake as much as Lady Gaga hates a plain pair of jeans. Is this really an opinion, or is this just a programmed response? Ah, see, see, free will? Determinism! Ready? FIGHT!
Let’s just assume we have free will, or at least the illusion of it, for the sake of this post. A directly programmed opinion isn’t actually an opinion at all. It would be a specific response we programmed to execute when given the right parameters. It’s not unlike typing Google.com into your less-than-sentient browser. Your browser doesn’t have an opinion. It doesn’t decide that Google is too mainstream and send you over to www.AskAHipster.com. So how do we establish opinions and free will? The AI needs to form opinions just as we do: through experience.
I agree completely with the Situated approach. The best approaches to Artificial Intelligence are the ones that simply receive input through sensors and learn from it until they build a model of their world for themselves. We humans are born with a blank database and very minimalist programming, and we eventually build a world through our senses from the causes and effects we witness. It must be the same for AI: a blank slate written on only by its own learning processes, until it builds a world for itself and learns to interact with it.
That means the AI needs to come packed with needs and instincts to motivate it to learn. We humans come with the need for food, and our instinct guides us to our mother’s breast or an artificial nipple. So too, a robot will need to be motivated toward some kind of action, and guided by some basic programming, until it grows into more and more complexity. I think, philosophically, that’s the tricky part. What need do we give a robot? A need for energy? A need for community? Hm… there’s a lot to think about here. I think there’s another post brewing. Moving on!
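To make the “needs and instincts” idea concrete, here is a toy sketch of a need-driven agent. Everything in it is an assumption for illustration: the single “energy” need, the three actions, the pretend world, and the learning rule (a simple running estimate, not any particular real algorithm).

```python
# A toy sketch of a need-driven agent: one invented need ("energy") and a
# simple instinct ("do whatever you currently believe restores it best").
# All actions, rewards, and the learning rule are invented for illustration.

class NeedDrivenAgent:
    def __init__(self):
        # Blank slate: no knowledge yet of which actions satisfy the need.
        self.value = {"rest": 0.0, "seek_charger": 0.0, "wander": 0.0}

    def act(self):
        # Instinct: pick the action currently believed to restore the most energy.
        return max(self.value, key=self.value.get)

    def learn(self, action, energy_gained, rate=0.5):
        # Nudge the action's estimated value toward the observed outcome.
        self.value[action] += rate * (energy_gained - self.value[action])

# A pretend world in which seeking the charger restores the most energy.
world = {"rest": 0.5, "seek_charger": 2.0, "wander": -0.5}

agent = NeedDrivenAgent()
for _ in range(10):                  # an early "childhood": try everything
    for action, gain in world.items():
        agent.learn(action, gain)

print(agent.act())  # prints "seek_charger"
```

The point of the sketch is that nothing told the agent which action to prefer; only the need was built in, and the preference was written by experience.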
A slow convergence
Simply put, we may not be smart enough to create Artificial Intelligence… yet. Many people believe there is a paradox in creating Artificial Intelligence in that in order for the brain to understand itself, it needs to be smarter than itself. Well, that’s a problem, and if that’s true, we might not be able to functionally recreate intelligence. Unless, of course, we upgraded our brain.
We are already on the verge of transhumanism. In fact, it is hard to argue that we aren’t already in a transhuman state and it is impossible to say we aren’t heading in that direction. We are melding more and more with synthetic replacement parts from cybernetic arms to 3D printed skulls. Eventually, we’ll be able to upgrade our brains with more “RAM”, or “Hard Drive space” and so on. Two things can happen from this. 1) We will become intelligent enough to create an AI or 2) We’ll modify ourselves enough to the point that we will, ourselves, become a fully functional Artificial Intelligence.
How will we recognize Intelligence once it’s here?
Okay, so let’s say we did it. We gave birth to some digital masterpiece that can maneuver a room, carry on a conversation, and annoy you by walking super slowly in front of you on the sidewalk. Before we crack open the champagne bottles and spray every computer scientist we see in the face with cheap bubbly, we need some kind of measuring rod to determine whether the simulated consciousness really is intelligent. So far, Alan Turing has my vote. This post is long enough and doesn’t need anything more than a reference to the Turing Test.
Suffice it to say (or don’t suffice it) that the idea of intelligence, especially mixed in a cocktail of the free-will debate, is a philosophic nightmare. We may never truly know if we’ve created Artificial Intelligence simply because there’s still a real argument out there whether or not we ourselves are intelligent.