Minds & Machines (2007) 17:273–286
DOI 10.1007/s11023-007-9069-z
Children, Robots and... the Parental Role
Colin T. A. Schmidt
Le Mans University, 52 rue des Docteurs Calmette et Guérin, BP 2045, Laval Cedex 09 53020, France
e-mail: Colin.Schmidt@univ-lemans.fr
Received: 18 January 2007 / Accepted: 10 July 2007 / Published online: 16 August 2007
© Springer Science+Business Media B.V. 2007
Abstract  The raison d'être of this article is that many a spry-eyed analyst of the works in intelligent computing and robotics fails to see the essential point concerning applications development, that of expressing their ultimate goal. Alternatively, they fail to state it suitably for the lesser-informed public eye. The author does not claim to be able to remedy this. Instead, the visionary investigation offered couples learning and computing with other related fields as part of a larger spectrum to fully simulate people in their embodied image. For the first time, the social roles attributed to the technical objects produced are questioned, and this with a humorous illustration.
Keywords  Cognitive epistemology · Communicative reciprocity · Reductio ad absurdum · Relation · Variant machine intelligence
Preamble on the Role of Autonomous Technology in Society
Historically, society did not perceive or acknowledge the value of technical objects;
it was not until early in the 20th century that the feeling society had for such
products started to turn positive. The reason for the lateness of this attention given to
the technical objects lato sensu resides in the semantics they carry, especially in
European societies: (1) hot, dirty and often infested factories; (2) sweat and hard
labour; (3) infernal cadences imposed by Tayloristic theories of the workplace; (4)
the image of the members of the lower classes who traditionally work in these contexts;
and so on. Of the negative meanings technical products convey, just listed, the first two
are basically due to the nature of the final product: heavy objects,
machinery and of course the quantity of brute material necessary to build them. The
fourth point is fading (or is gone, as in the New World). The third negative
component mentioned has not totally vanished: neo-Taylorists, or whatever other label
suits the 'rigorous bosses' of modern industry, are still present in today's
society. The difference, then, when one looks at the evolution of technical objects,
take for example evocative ones like the simple automata of one hundred years ago and
current-day machinery with the gift of speech, learning, ‘‘reasoning’’, etc., is
that a lot of tasks have been automated and miniaturised. The image of the person
working day and night to design and build a machine that will have all the human
traits necessary for providing company to lonely individuals is much more bearable
and somewhat glamorous. Many fields necessitating artificial neural systems have
recently sprung up in this area (e.g. ‘‘service robotics’’, ‘‘autonomous agents’’...).
The age being modelled over time for discourse with old and new human-like
'contraptions' is as follows:
1. 1950s, A. Turing: adult discourse
2. 1960s–70s, J. Weizenbaum: adult/teenager discourse?
3. 1980s, various authors: 7–15-year-old
4. 1990s, R. Brooks' team: 4–7-year-old
5. 2000s, J. Zlatev: 2-year-old
6. 2001, B. Scassellati: >1-year-old
7. 2007–>?: >? months-old
It seems there is some regression here as scientists modify their expectations of
what the machine can do.
So although technical objects as we know them got off to a weak start, with time we
have learned to procure a more 'elegant' place for them in our society and in the
scientific community. In fact, scientists may have got carried away with attributing
more or less social roles to artificially neural-rich machines. Does this mean they are
gaining autonomy thanks to Man? Is it possible for a technical object to gain
autonomy in the human sense of the word if it has been made by a human? To what
extent is the use of the word ‘‘autonomous’’ meant to be simply metaphorical? (I treat
these 'loaded' questions elsewhere with Kraemer; cf. Schmidt and Kraemer 2006; see
also Schmidt 2004.)
Some questions of the philosophical sort are starting to be raised in Cognitive
Science, cognitive robotics, etc. In this sense, one could say that the year 2001 was a
‘‘fast year’’ for research in Robotics. In that year, J. Zlatev, author of an article in
Minds and Machines, asked two highly pertinent questions for robotics: (1) If a robot
is able to participate in simple language games as adequately as a child, should we
concede that the robot handles true meaning? and (2) How would we go about
developing a robot which could possibly live up to a positive answer to the first
question? My approach is straightforward: (a) refute the first question, so as to (b) be
able to drop the second. This rhetorical statement is meant to draw attention to a
growing problem due to computation-related communities that valorise materialism
beyond necessity. As the author of the questions above inspires the Epigenetic
Robotics movement, a movement for which it is preferable that a robot learn the
appropriate behaviour rather than simply having it programmed in, it would seem
that what he meant by saying ‘‘developing a robot (able to participate in language
games)’’ is that learning is the most important aspect when it comes to integrating
intelligence ‘‘into matter’’. Another author named Milner, perhaps better known to
the readership of Neuro-computing, expresses the limits of betting heavily on the
learning-explanation horse: ‘‘We should recognise that learning is a relatively recent
evolutionary development, and that most of the animal population, including some
of the most successful species, flourish with negligible capacity for individual
learning. Some build snares, for example, or communal dwellings that would tax the
ingenuity and skill of a human [...] Learning is not a substitute for innate behaviour;
it is an example of it. Learning is an evolutionary development that allows fine
tuning of a very complex piece of predominantly heritable machinery (Lashley 1947;
Tinbergen 1951)’’ (Milner 1999, pp. 6–7, in the sub-section ‘‘The Evolution of
Behaviour’’; Lashley's discussion appears in ‘‘Structural variation in the nervous
system in relation to behaviour’’, Psychological Review, 54, and Tinbergen's in The
Study of Instinct, Oxford: Clarendon Press). It is possible that Computer Scientists
and roboticists are over-enthusiastic about learning as it is even more recent in their
discipline.
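Milner's contention can be caricatured in a few lines of code. The Python sketch below is purely illustrative, my own gloss rather than anything proposed by Milner or by the roboticists discussed here: behaviour is drawn from a fixed, ‘‘heritable’’ repertoire, and individual learning merely records small overrides on top of it.

    # Illustrative sketch: learning as fine-tuning of predominantly innate machinery.
    INNATE_RESPONSES = {            # fixed, "heritable" stimulus-response repertoire
        "looming object": "withdraw",
        "caregiver voice": "orient",
        "food odour": "approach",
    }

    learned_adjustments = {}        # small, individually acquired overrides

    def respond(stimulus):
        # Learned fine-tuning is consulted first, but the innate repertoire
        # does almost all of the work.
        return learned_adjustments.get(stimulus,
                                       INNATE_RESPONSES.get(stimulus, "ignore"))

    def learn(stimulus, outcome):
        # Individual learning: an adjustment layered on the innate base,
        # not a substitute for it.
        learned_adjustments[stimulus] = outcome

    learn("caregiver voice", "vocalise")
    print(respond("looming object"))    # -> withdraw (innate)
    print(respond("caregiver voice"))   # -> vocalise (learned override)

On this picture, removing the learning layer degrades the system only slightly, which is precisely Milner's point against betting everything on learning.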
I myself think the problem giving rise to such a debate for computing, robotics, etc.
resides in the fact that a large part of intelligence cannot be situated in the matter of
the individual, particularly its dialogical components. I therefore argue in
favour of supporting another well-known sub-domain of AI/HCI/Robotics thought
in order to stimulate research in the artificial sciences based on the reality that the
discursive aspects of the human mind bring forth (cf. supra, Preamble). This reality
should have technological consequences. I am sure the reader will agree with me
that, after reading this article, some of the actions produced in recently-formed
scientific communities could be re-thought for the future.
Robotic Brains Under the Spotlight
I would now like to speak about an issue with a history of fifty years and more in the
Sciences of the Artificial. Important research being carried out at top-notch
scientific institutions like MIT, Carnegie Mellon University and still many
others seems to be having difficulty with the mind-body problem in creating robots
that think. Weng et al. teamed up to confirm this in their Science Magazine article a
few years back (2001), with a discussion of ‘‘autonomous mental development’’ that
was limited to brain and body building (cf. Weng et al. 2001, pp. 599–600). Whether
their intention included outright occultation of the mind or not, reductionism
cannot account for mind as it cuts this
latter off from its socio-communicative dimension (i.e. relations with other minds),
the very features that make a mind a mind and not a brain, to state things in a
‘folkish’ manner. A few months later in that same year, Brian Scassellati from Yale
University (at MIT AI Lab. at the time) used the following citation from Turing’s
famous article presumably in order to sum up his Doctoral Dissertation (first
citation, placed top centre-page, Chapter 1).
Instead of trying to produce a program to simulate the adult mind, why not
rather try to produce one which simulates the child's? (A. Turing 1950, p. 456;
cf. Scassellati 2001)
I do not have the impression that exponential progress in the area of ‘‘humanoid
robotics’’ has overcome the philosophical hurdle of capturing the dialogical essence of
mind that Turing himself was aware of fifty-five years ago. With his ‘‘embodied
theory of mind’’, Scassellati may have been referring to, or taking inspiration
from, works such as Jordan Zlatev's well-written 1997 work on Situated
Embodiment. And could it be that Scassellati would agree with Milner's limits
(cf. supra) on explaining human cognition with the concept of learning?
Whatever the relation, academics working in Robotics and related fields like
Human-Machine Interaction and Artificial Intelligence often seem to undergo an
out-of-proportion positivistic enthusiasm for their 'babies'. Why is this? Don't any
of them have the liberty to really express their doubts? There surely must be some
conceptual hesitation in their mind when the action implied by their work
constitutes replacing human beings. Fortunately, when they do replace a human
being with a machine, it is quite often in the context of repetitive task handling that
human beings no longer like to do. But there are a few academics who work on
challenges that remain purely technological in nature (i.e. not that useful, since man
does not want to give up the action concerned; examples involving speaking come
to mind). Their technological audacity does not stem from usability reports or
interviews with users. Simply defying the laws of nature is what they seek to do.
Scassellati gets his expectations about machine intentionality the wrong way
around when he writes about the ‘‘Implications to Social Robotics’’ of his work:
‘‘Rather than requiring users to learn some esoteric and exact programming
language or interface, more and more systems are beginning to use the natural social
interfaces that people use with each other. People continuously use this extremely
rich and complex communication mechanism with seemingly little effort. The
desire to have technologies that are responsive to these same social cues will
continue to drive the development of systems [...] Theory of mind skills will be
central to any technology that interacts with people. People attribute beliefs, goals,
and desires to other agents so readily and naturally that it is extremely difficult for
them to interact without using these skills. They will expect technology to do the
same’’ (Scassellati 2001, p. 159).
In fact, interlocutors in human-resembling communication like to be
reassured that their interlocutor is human. If one wishes to escape from the
Electrical Engineering and Computer Science point of view, one has to read for
example the works of Norman, a cognitivist who addressed the DARPA/NSF
Conference on Human-Robot Interaction in... yes, the year 2001. He then gave an
analogy to persuade any human being to understand why machine speech should not
be flawless in the human sense. After exposing a version of the Asimovian laws of
robotics, Norman states: ‘‘while speech input is still imperfect, the robot must make
this clear [...]’’; he then gives maxims, the first of which is: ‘‘Don't have flawless,
complex speech output at a level far more sophisticated than can be understood.
If the robot wants people to realise it has imperfect understanding of language, it
should exhibit these imperfections in the way it speaks. (If a foreign speaking person
could speak fluent English but only understand pidgin speech, the more it spoke
flawlessly, the less other people would understand the need to speak in pidgin)’’
(cf. Norman 2001). And he is not the only one who argues this point on the same
basis (cf. infra).
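Norman's maxim lends itself to a small sketch. The Python below is my own hypothetical illustration (the threshold and the crude keyword filter are invented for the example, not drawn from Norman): the system deliberately degrades its output fluency to match its input understanding, so that users calibrate their speech to the machine's real competence.

    # Hypothetical sketch of Norman's first maxim: do not speak more
    # fluently than you understand.
    COMPREHENSION_LEVEL = 0.3   # assumed score in [0, 1] from the speech-input side

    def phrase_reply(meaning, comprehension):
        # Match output fluency to input understanding so that interlocutors
        # see the need to simplify their own speech.
        if comprehension < 0.5:
            # telegraphic, 'pidgin-like' output advertises imperfect understanding
            content = [w for w in meaning.split() if len(w) > 3]
            return " ".join(content).upper() + "?"
        return meaning          # fluent output only once understanding warrants it

    print(phrase_reply("I would like you to hand me the red block",
                       COMPREHENSION_LEVEL))
    # -> WOULD LIKE HAND BLOCK?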
Brain-child projects are fine, but will they ever lead to a ‘‘mind-child’’? Perhaps
this term was never coined, in English anyway, because this notion is out of reach
(whereas that of brain-child is). We should look at the further specialised field of
Robotics and Computation. At least one influential author has caught my eye.
‘‘Artificial Problems’’
Some authors like to delve into ‘‘thought experiments’’ using ‘funny’ examples to
study the possibilities of resolving some problem in the Artificial Sciences. Let us
try to understand, in simple terms, what Zlatev meant in his (yes again!) 2001 article
in Minds and Machines (Zlatev 2001, ‘‘The epigenesis of meaning in human beings,
and possibly in robots’’, Minds and Machines, 11, Kluwer).
His goal was to use one of these ‘‘thought experiments’’ in
order to upgrade the position of the Artificial (robots) on the social status scale,
or quite possibly to argue in favour of taking robotic technology even further ahead.
Or was it only to test the plausibility of lifting them up to our level?
In any event, he devises a fictive situation for this purpose. A two-year-old child
is sitting on the floor and interacting with his father through eye contact as they pass
things like balls and blocks back and forth. The child gestures towards an object
that is out of reach and says ‘‘train’’. Dad says ‘‘Oh, you want the train-engine’’. In
receiving it, the child repeats ‘‘train-engine’’, thereby indicating that the adult's
slight correction concerning the proper term of reference has not passed unnoticed;
etc. (cf. p. 155). Zlatev then tells us that, when it comes to playing simple language
games like this, you can remove the two-year-old and put a robot in the same spot on
the floor to occupy Dad; he says that today we can build a robot that would have the
same physical and intellectual capacities as this person’s son or daughter. I agree
with him so far.
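What ‘‘participating’’ in this language game minimally demands can be made concrete. The Python sketch below is my own construction, not Zlatev's implementation nor anyone's published system: the learner asks for an object with its current term, registers the adult's reformulation, and repeats the refined term to signal that the correction has been taken up.

    # Illustrative sketch of the corrective language game from Zlatev's scenario.
    lexicon = {"toy-engine": "train"}   # the learner's current word for the object

    def request(obj):
        # Ask for the object using whatever term the lexicon currently holds.
        return lexicon.get(obj, "that")

    def hear_correction(obj, adult_term):
        # The adult's reformulation ("train-engine") replaces the cruder term;
        # repeating it signals that the correction has not passed unnoticed.
        lexicon[obj] = adult_term
        return adult_term

    print(request("toy-engine"))                          # -> train
    print(hear_correction("toy-engine", "train-engine"))  # -> train-engine
    print(request("toy-engine"))                          # -> train-engine

Whether such a mechanism ‘‘handles true meaning’’ is, of course, exactly what the author's first question puts in doubt; the sketch shows only how thin the behavioural criterion can be.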
My endeavour is to focus on the communication part of his proposal as I believe
this is where robotics would basically stand to gain the most from my critique.
As communication is a social activity that does not have anything really to do
with physical entities or genes themselves, I am sure the author referred to here will
have no objection: he does in fact carry his point of view well outside of the
materialistic topics traditionally spoken about in robotics.
One does not really have to read beyond the Introduction of Zlatev’s rather
lengthy article to find out whether his ‘‘Epigenetic Robotics’’ will not be able to