Artificial Intelligence

Moravec on Universal Robots

Universal Turing Machines
Alan Turing (1912-1954), one of the fathers of computer science and a major figure in cryptography during World War II, proposed the idea of a Turing Machine, a machine that could be used to carry out a particular, precisely specified mathematical operation.  A Turing Machine was to be quite simple mechanically.  It would consist of a machine that could read and write a finite set of symbols on an indefinitely long tape divided into cells.  Depending on the symbol it read at a given moment and the state it was in at that moment, the machine would write some predefined symbol on the tape, advance the tape forwards or backwards, and (possibly) change its state.

Turing generalized this idea into the Universal Turing Machine, a machine that could be made to function like literally any particular Turing Machine.  A Universal Turing Machine could therefore be used to perform literally any finite mathematical operation.
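To make the tape-and-table picture concrete, here is a minimal sketch of a Turing Machine simulator in Python (a sketch only; the 'flipper' machine it runs is an illustrative choice of mine, not one of Turing's examples).  Because the transition table is handed to the simulator as ordinary data, one and the same program can be made to behave like any particular machine you describe to it, which is the core of the Universal Turing Machine idea.

    # Minimal Turing Machine simulator (illustrative sketch).
    # The table maps (state, symbol) -> (symbol to write, move, next state).
    from collections import defaultdict

    def run_turing_machine(table, tape, start_state, halt_state, blank='_', max_steps=1000):
        cells = defaultdict(lambda: blank, enumerate(tape))   # an indefinitely long tape
        head, state = 0, start_state
        for _ in range(max_steps):
            if state == halt_state:
                break
            write, move, state = table[(state, cells[head])]
            cells[head] = write
            head += 1 if move == 'R' else -1                  # advance forwards or backwards
        return ''.join(cells[i] for i in sorted(cells))

    # Example machine: scan right, flipping 0s and 1s, halting at the first blank cell.
    flipper = {
        ('scan', '0'): ('1', 'R', 'scan'),
        ('scan', '1'): ('0', 'R', 'scan'),
        ('scan', '_'): ('_', 'R', 'halt'),
    }
    print(run_turing_machine(flipper, '0110', 'scan', 'halt'))   # prints 1001_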

This idea provided a theoretical background for work in Computer Science and Artificial Intelligence.  It also lies behind a currently influential idea about what it is to be intelligent.  We'll come back to this idea shortly.

Universal Robots
Moravec draws on the idea of a Universal Turing Machine to propose the idea of a Universal Robot, i.e., one which could be used to perform any humanly (physically?) performable task.  The contrast here is with robots that are designed to perform only a very narrow range of tasks.

Moravec thinks we are on the road to making such machines.  He suggests that robots that can reason as well or better than humans will exist by about 2040.

First-Generation Universal Robots:  2010 - 3,000 MIPS

Lizard-scale intelligence

These robots will be capable of basic 'perception' and manipulation of their surroundings.

"able to converse and read" in a basic sense (235)

"... may find their very first uses in factories, warehouses, and offices." (235)

"basic housecleaning." (235)

Second-Generation Universal Robots:  2020 - 100,000 MIPS
Mouse-scale intelligence

Capable of "adaptive learning" and "statistical learning" (236)

"will occasionally be trained by humans, but more often will simply learn from their experiences." (236)

"While a first-generation robot's personality is determined entirely by the sequence of operations in the application program it runs at the moment, a second-generation robot's character is more a product of the suite of conditioning modules it hosts." (237)

Third-Generation Universal Robots: 2030 - 3,000,000 MIPS
Monkey-scale intelligence

"will learn much faster because they do much trial and error in fast simulation rather than slow and dangerous physicality." (238)

This will be possible because the 3rd Generation Robots will be able to engage in world modeling.

"the robot can preview what it is about to do, in time to alter its intent if the simulation predicts it will turn out badly -- a kind of consciousness." (239)

Note the claim about consciousness.  Is it accurate?
Of course, this will require a well-developed ability to 'model' the world.  This may require some time to be set aside for learning when a robot is placed in an unfamiliar environment.
"third generation robots will require noncritical 'play' periods wherein things are handled, spaces explored, and minor activities attempted, simply to tune up the simulation." (239)
Fourth-Generation Universal Robots: 2040 - 100,000,000 MIPS
Human-scale intelligence
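As a rough check on the numbers in the generation headings above: the jump from 3,000 MIPS in 2010 to 100,000,000 MIPS in 2040 is a factor of about 33,000 over thirty years, i.e., a doubling of computing power roughly every two years.  The short calculation below is just arithmetic on Moravec's projected figures.

    # Growth rate implied by Moravec's projected MIPS figures (from the headings above).
    import math

    projections = {2010: 3_000, 2020: 100_000, 2030: 3_000_000, 2040: 100_000_000}
    factor = projections[2040] / projections[2010]      # roughly 33,000x over 30 years
    doubling_time = (2040 - 2010) / math.log2(factor)   # roughly 2 years per doubling
    print(f"factor: {factor:,.0f}x; doubling time: about {doubling_time:.1f} years")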

Progress in Artificial Intelligence:  While generations 1-3 are being developed, "the conventional Artificial Intelligence industry will be perfecting the mechanization of reasoning.  Since today's programs already match human beings in some areas, those of forty years from now, running on computers a million times as fast as today's should be quite superhuman." (240)

This assumes that we'll find a solution to the frame problem, i.e., the problem of modeling change in a complex world.  This is currently a serious problem in artificial intelligence.  Anything that happens in the world has indefinitely many effects, but not all are relevant to what we want to do.  How do we decide which consequences of what happens in the world we should pay attention to?
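To see why this is hard, here is a toy illustration, invented for these notes rather than taken from the reading: a naive reasoner that, after a single action, has to re-check every fact it knows, even though almost none of those facts could have changed.  The frame problem is the problem of avoiding this kind of exhaustive, mostly pointless re-checking.

    # Toy illustration of the frame problem (the facts and the effect model are made up).
    facts = {
        "cup is on the table": True,
        "door is closed": True,
        "wall is blue": True,
        "coffee is hot": True,
    }

    def effect_of(action, fact, value):
        # Hypothetical effect model: moving the cup only changes facts about the cup.
        if action == "move cup to the counter" and "cup" in fact:
            return False   # the cup is no longer on the table
        return value       # everything else persists, though we had to check each fact to know that

    def apply_action(action, facts):
        # Naively re-derive the status of every fact, relevant or not.
        return {fact: effect_of(action, fact, value) for fact, value in facts.items()}

    print(apply_action("move cup to the counter", facts))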
"fourth-generation robots will have the general competence of human beings ... As they design their own successors, the world will become ever stranger." (241)
What is a mind?
The Turing Test Answer
Suppose everything unfolds as Moravec suggests.  If so, by 2040, the world will contain machines  that can perform virtually any task we can.  A number of related philosophical questions arise at this point.
Are these fourth-generation robots intelligent?
Do they have minds?
Are they conscious?
Alan Turing suggested one very influential way of answering this question.
The Turing Test:  If a machine cannot be distinguished from a human being in a 'blind' test, then it is intelligent/has a mind/is conscious.
Note:  this somewhat misrepresents Turing's original presentation of the test, but it is the way in which the test is generally thought of today.
Is this the right way of answering this question?
*************************************

Some Theories of Mind

An area of philosophy known as Philosophy of Mind tries, among other things, to answer the question of what it is to have a mind.
Dualism: A Traditional Theory of Mind
The ‘traditional’ answer to the question has been one that takes the position known as dualism.  It's often associated with Rene Descartes (1596-1650), although many other thinkers have endorsed it at one time or another.

On this view, mind and body are distinct.  There are two sorts of things in the world (that's why the position is known as dualism):  mind and body.  The mind is a non-physical thing which nonetheless manages to be related to the body somehow.

Some advantages of this theory
(1) It explains how life after death is possible.

(2) It makes sense of the idea that there must be something more to us than just the physical stuff we are made of.

Some problems facing this theory
(1) It seems to make the literal creation of Artificial Intelligence impossible.  How are we supposed to create a non-physical thing?
Should the Dualist's response be:  'that's right, creating true artificial intelligence is impossible'?
(2) Why believe in mental stuff?  Why should we think there are such things as non-physical souls?  What kind of evidence could there be for them?

(3) The Mind/Body Problem:  It seems as though the mind and body interact.  For example, I feel pain when something happens to my body.  But how is that possible on this view? If the body is a physical thing and the mind is a non-physical thing, how can such interaction take place?

Descartes’ answer: ‘signals’ get passed back and forth between mind and body by means of the pineal gland.

Is this a convincing response to this problem?

*******************************

Type-identity Theory

Mental states are just brain states (or at least states of the body).
Particular type of mental state = particular type of physical state.  (e.g., believing it’s Thursday = neurons firing in region X of the brain).
An Advantage of this Theory
It solves the Mind/Body Problem.  Effectively, your mind is a part of your body, so it's not at all mysterious how mind and body can interact.
Some Problems Facing this Theory
(1) If mental states are physical states, they have physical properties such as size, colour, shape, etc.  How can that be?  (E.g., how can it make sense to say that my belief in the existence of the Easter Bunny is pale gray and weighs half an ounce?)

(2) Thoughts are about something; brain states aren’t about anything.  How can they be the same thing?

(3) Mental states feel a certain way, but brain states don’t.

Is the answer to problems 1-3 simply that we must change the way we think about what mental states are?
(4) Why think only a particular kind of physical stuff can produce consciousness?  If a robot is made out of different stuff than us, does that mean it can’t have a mind?
Functionalism offers a non-Dualist solution to this problem.
****************************************

Functionalism

Functionalism defines mental states in terms of their inputs, their outputs, and their relations to other mental states.  That is, it defines a particular mental state in terms of what a person in that state will do, and what changes in his other mental states will occur, when he receives a particular kind of 'input'.

The important point here is that, rather than defining the mind in terms of what it’s made of, functionalism defines it in terms of what it does.

This is a popular theory of mind amongst those who hope to literally create artificial intelligence.

It's also the theory of mind that seems to lie behind the Turing Test.

An Example:  A functional definition of a Coke machine

Suppose Coke costs 10 cents and that the only possible inputs for our machine are nickels, dimes, quarters and loonies.

We can think of the machine as always being in one of two possible states.

State 1 = wanting 10 cents
State 2 = wanting 5 cents
Here’s a functional definition of being in state 1 (i.e., a functional definition of wanting ten cents).  The machine is in state 1 if and only if it is such that:
(1) if the machine receives a nickel, it will go into state 2
(2) if the machine receives a dime, it will give out a Coke and stay in state 1
(3) if the machine receives more than a dime, it will give out a Coke along with the input minus 10 cents and stay in state 1
What does it mean to be in state 2?  A thing is in state two if and only if it is such that:
(1) if the machine receives a nickel, it will give out a Coke and go into state 1
(2) if the machine receives a dime, it will give out a Coke along with five cents and go into state 1
(3) if the machine receives more than a dime, it will give out a Coke along with the input minus five cents and go into state 1
Notice that this definition of a Coke machine doesn't say anything about what the machine must be made of.  Anything which functions like this is a Coke machine, according to the definition.
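Because the two-state definition above is really just a small table of inputs, outputs, and state transitions, it can be written down as a short program.  Here is a minimal sketch in Python (the class and method names are my own; nothing hangs on them).  Any system that realizes this table, whatever it is physically made of, counts as the Coke machine.

    # The functional definition above as a two-state machine (a sketch; coin values in cents).
    WANT_10, WANT_5 = "wanting 10 cents", "wanting 5 cents"

    class CokeMachine:
        def __init__(self):
            self.state = WANT_10

        def insert(self, cents):
            """Accept a coin (5, 10, 25, or 100) and return whatever the machine gives out."""
            credit = 0 if self.state == WANT_10 else 5
            total = credit + cents
            if total < 10:                    # only a nickel so far: wait for more money
                self.state = WANT_5
                return []
            self.state = WANT_10              # enough for a Coke: dispense and reset
            output = ["Coke"]
            if total > 10:
                output.append(f"{total - 10} cents change")
            return output

    m = CokeMachine()
    print(m.insert(5))     # []  (the machine is now 'wanting 5 cents')
    print(m.insert(25))    # ['Coke', '20 cents change']  (back to 'wanting 10 cents')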
The Big Objection to Functionalism:  Isn’t it possible for a thing to function in the right way but still not have a mind?
 
*********************************

What's the correct theory of mind?

[Philosophy 2801]