Ethics of Robotics | xltronic messageboard
 
Ethics of Robotics
 

offline EVOL from a long time ago on 2007-04-29 20:43 [#02077478]
Points: 4921 Status: Lurker



ok, i know everyone is having a good time discussing this
hypothetical situation of robots and autonomy and evolution
and consciousness, but alas i must interject this tiny fact
once more to be sure you're all still grounded... oil. it
takes millions of years to form, so even if these "robots"
(hypothetically, of course) were to evolve exponentially, they
would have to do it, starting now, in less than 50 years.
since we've reached the point of "peak oil" already,
consumption and extraction have outpaced the discovery of
new oil reserves, because forming oil is a geological process
that takes millions of years. and so far, all the "alternatives"
to oil use more energy from... duh, oil, to even
manufacture than the amount of energy the alternative
ends up producing. right now the demand for
alternatives is not yet at the point where it justifies
the larger infrastructure required to quell our insatiable
thirst for power, and infrastructure on that scale will
take several decades to even begin to match the point we are
at now with "crude" oil. even then, given the rate of
growth in demand, estimates put production at
98.3 million barrels a day by 2030, an increase in world
consumption of 25%. the reserves declared by oil-producing
nations haven't changed (or decreased, for that
matter) even with so many barrels extracted every year,
because opec ties the number of barrels a member can export
to its declared reserves. that way they won't lose the money
they have grown to depend on from their certain allotted
amounts of oil exports.
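(a quick sanity check of those figures: the 98.3 million barrels/day and 25% numbers
together imply a 2007 baseline of about 78.6 mb/d and only ~1% annual demand growth.
the baseline below is inferred from the post's own numbers, not stated in it:)

```python
# Sanity check on the 2030 projection above. The ~78.6 mb/d 2007 baseline
# is inferred from 98.3 / 1.25; it is not stated in the post.
def implied_annual_growth(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a long-range projection."""
    return (end / start) ** (1 / years) - 1

baseline_2007 = 98.3 / 1.25          # ~78.64 million barrels/day
rate = implied_annual_growth(baseline_2007, 98.3, 2030 - 2007)
print(f"implied demand growth: {rate:.2%} per year")  # ~0.97% per year
```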

robots?

LOLOLOL!!!

more like, humans?

HAHAHA!!!


 

offline w M w from London (United Kingdom) on 2007-04-30 00:42 [#02077538]
Points: 21452 Status: Lurker | Followup to EVOL: #02077478



I've been reading about this some today. According to here:

LAZY_TITLE

'Coal is especially abundant and by itself can sustain the
current energy consumption of the entire planet for 600
years.'

Bill Joy says green technology will be the next opportunity
for Google-like success.

Anyway, something significant is going to happen to
humanity relatively soon. It's not like we're in the
Paleozoic with a huge, glacially slow stretch of
relatively static time ahead of us. One of the few
reasons I want to live is to see how fucked up everything
will get and how soon, so I can laugh at humanity and be
glad I didn't participate much in it.

There are other problems, like how medical technology and our
world dominance have largely stopped the healthy process of
natural selection. The vast majority of genetic mutations
are for the worse, so if you inherit something like dwarfism
you will probably survive and be able to reproduce copies
that also carry it. I don't mean to pick on anyone, and there
are a vast number of other genetic problems, but the point is
that the normal mutation process continues, yet everything
(except the immediately fatal) gets selected instead of only
the traits that are fit for the environment. And on top of that,
our environment is fucked to hell anyway. My brain feels
like I should be swinging from trees in a small group
of 20 or something, not belonging in this
nightmare of a mess caused by humanity's attempt to cheat
nature. It was probably inevitable.


 

offline goDel from ɐpʎǝx (Seychelles) on 2007-04-30 01:16 [#02077540]
Points: 10225 Status: Lurker | Followup to EVOL: #02077478



what makes you so sure those estimates are correct? i've had
the same discussion with people who'd worked at shell (on an
interim basis, so they weren't tied to the company), and
according to them this problem is way more hyped up than it
deserves. and come to think of it, it would be a pretty
easy way for the oil business to keep prices high.
furthermore, countries are buying oil like madmen, just in
case the resources do dry up, or, more probably, for when
there's a new crisis. so by the time we run out of
resources, there's probably already a bunch of smart droids
on the loose shooting our asses.


 

offline Drunken Mastah from OPPERKLASSESVIN!!! (Norway) on 2007-04-30 02:42 [#02077551]
Points: 35867 Status: Lurker | Show recordbag



Where do people get the idea that robots will have the
ability to evolve from? It really doesn't make much sense if
you think about it.


 

offline goDel from ɐpʎǝx (Seychelles) on 2007-04-30 03:17 [#02077556]
Points: 10225 Status: Lurker | Followup to Drunken Mastah: #02077551



Why not? If a robot could develop its own 'thoughts' and
develop new theories, why couldn't it evolve, creating
better versions of itself?
And on the other hand there's something like artificial
life. To a certain extent there are already robots which are
evolving. Is the notion of evolving robots really that
farfetched?


 

offline w M w from London (United Kingdom) on 2007-04-30 03:35 [#02077558]
Points: 21452 Status: Lurker | Followup to Drunken Mastah: #02077551



I think so far it is based on memetics and is in its
infancy. Biological evolution is based on replicating genes,
and all the complexity of life came about as a side product
just because something had the property of replicating
(since mutations occur, and fitter versions replicate
faster/more frequently/etc. and so become more numerous,
passing on those mutations).
The meme theory is that this eventually resulted in human
brains, which paved the way for a new replicator:
information, in brains and in the things brains build, such as
computers/books/etc. There have been criticisms that it is
quite different from genetics, but I think it is only meant as an
analogy. Right now the next supercomputer will
probably be fairly similar to the last one, maybe because
it'd be too hard to drastically change the concept/idea of
it beyond a small mutation. Perhaps some completely
different design would be superior, but we never discovered
it because we are evolving along this particular direction
in biomorph land.
A gene's environment is other genes, so the ones that are
selected are ones that work together with others (sharp-teeth
genes go with genes for stomachs that can digest meat),
and their information codes for such physical structures
through embryonic development, as I understand it, to enhance
the survival of the genes. Memes don't seem to code for
stuff in an organized way like this, but maybe they are currently
in an early stage of their evolution (they're already
replicating; maybe they just haven't built their phenotype yet).
All these humorous youtubes and stuff have high replication
success just because they entertain us; this lame reason
might result in the emergence of something that takes some
complex path of its own.
But if we build AI that surpasses us, it could possibly build or
augment its own AI, possibly using evolution as a tool, or
maybe even something we'd never understand that is superior
to evolution. Maybe it can just create exactly what it wants
from atoms or something.
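(the replicate-mutate-select loop described above can be sketched as a toy
program; the target string, mutation rate and population size here are
arbitrary illustrations, nothing from the thread:)

```python
import random

TARGET = "robot"                       # arbitrary "fit" form for the demo
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s: str) -> int:
    """How many characters already match the target 'environment'."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.1) -> str:
    """Copy a string, occasionally miscopying a character."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(pop_size: int = 50, generations: int = 300) -> str:
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        # fitter strings replicate more; the best copy survives unmutated
        next_gen = [pop[0]]
        while len(next_gen) < pop_size:
            next_gen.append(mutate(random.choice(pop[:pop_size // 2])))
        pop = next_gen
    return max(pop, key=fitness)

print(evolve())   # usually converges to "robot"
```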


 

offline Drunken Mastah from OPPERKLASSESVIN!!! (Norway) on 2007-04-30 03:41 [#02077560]
Points: 35867 Status: Lurker | Followup to goDel: #02077556 | Show recordbag



Eh.. why would it develop thoughts (that's part of the
question "why would it evolve," so the answer doesn't hold
up)? Do you think thoughts somehow automatically appear if
there's just enough information available?


 

offline redrum from the allman brothers band (Ireland) on 2007-04-30 03:50 [#02077562]
Points: 12878 Status: Addict



evol discovers the concept of peak oil. well done evol, well
done.


 

offline goDel from ɐpʎǝx (Seychelles) on 2007-04-30 04:03 [#02077570]
Points: 10225 Status: Lurker | Followup to Drunken Mastah: #02077560



Could you explain to me what you think thoughts are, and why
an artificial mind wouldn't be able to produce them?
My point being: this is an area where there's a lot of
discussion and hardly any hard evidence. I don't see how you
can be that certain as to whether or not things like this
could be possible. Are you god, or anything? Are you beyond
science?

And btw, to a certain extent a chess computer is already
developing its own thoughts. Or any autonomous machine, for
that matter. By definition, an autonomous machine is a
machine which adapts to its environment independently. You
may bring counterarguments using concepts like free will,
consciousness, qualia, intentionality and whatnot, but in
the end these concepts are themselves questionable, even in
today's science. Which, i think, leaves more than enough
room for the possibility that in the future robots will
develop their own thoughts.


 

offline Drunken Mastah from OPPERKLASSESVIN!!! (Norway) on 2007-04-30 04:30 [#02077577]
Points: 35867 Status: Lurker | Followup to goDel: #02077570 | Show recordbag



Answer my questions.

I have no definition for thought, I can just say that each
one of us knows what it is.

I'm not necessarily saying an artificial mind won't be able
to develop thoughts, I'm just saying there's no reason for
it to happen, and especially not the way people seem to
believe it will (a sudden development in some random robot
that starts building its own robots because its immediate
thoughts are about reproduction and world domination).
Robots will most likely continue to be purpose-built, and
that involves giving them a set of instructions and letting
them process signals (Chinese room, etc. etc.). If someone
were to build a robot with the specific purpose of making it
conscious, how would they go about it? Current robotics
development suggests the only way would be to "raise"
it like you would a child, simply because of the undefinable
nature of consciousness and thought (and of simpler things,
like riding a bike).

A chess computer is calculating; it doesn't know what a rook
is, it just follows algorithms according to values fed to
it.
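(for what it's worth, the "values fed to it" point can be made concrete
with a minimax sketch; the tree and its leaf scores below are made up
purely for illustration:)

```python
# Minimax over a hand-built game tree: leaves are numeric evaluations
# "fed to" the machine; the program never knows what a rook is.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):          # leaf: a supplied value
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Our move (max), then the opponent's reply (min):
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # -> 3: pick the branch whose worst reply is best
```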

And, yes, I may indeed bring counterarguments using
consciousness and qualia and all the other things because
that is exactly what we're talking about here! If you're
removing those things from the discussion, there's nothing
more to talk about, not even the possibility of artificial
intelligence.


 

offline goDel from ɐpʎǝx (Seychelles) on 2007-04-30 04:53 [#02077581]
Points: 10225 Status: Lurker | Followup to Drunken Mastah: #02077577



What is there to answer?
There's no definition of thought, but we all 'know' what
thoughts are (just like consciousness, qualia, etc.). I'm not
removing those things from the argument, I'm just saying
that as long as these concepts are not clear, there can't be
any definite conclusion in any direction, and that this
leaves open the possibility of AI, etc. Which is my only
point. I don't know how. I don't know when. But as long as
these concepts are as open to discussion as they are now,
everything's possible.
You say a chess computer only calculates (like the
chinese-room experiment). In what way doesn't the human mind
just calculate? You may 'know' it doesn't, but do you have
any proof? Or, tangentially, is free will really free will
when a spike of brain activity can be measured before you're
even aware of having made a 'free' choice? We're not going to
find answers here, no matter how certain you are of how well
you know your own thinking.


 

offline Drunken Mastah from OPPERKLASSESVIN!!! (Norway) on 2007-04-30 05:07 [#02077583]
Points: 35867 Status: Lurker | Followup to goDel: #02077581 | Show recordbag



There is overwhelming evidence that the human mind doesn't
just calculate: we all have intimate experience of it! When
I see something, I immediately know what it is and what I
can do with it. In a situation, I present options to myself
and deliberate about them. Deliberation may appear similar
to calculation, but if you just consider yourself first as
deliberating and then as calculating, you see the difference
(back to the Chinese room: just imagine someone inside who
knows how to do maths. He is fed equations that he solves
and puts out on the other side. Even if he actually
understands what he's doing, the calculations, that the
number 2 is 2, he still isn't really aware of what he is
actually calculating: the mathematical 2+2=4 is quite
abstract on its own; what things are there first 2 and then 2
more of here?). Even a hardcore reductionist has to admit
that at least there is a significant
qualitative difference here (I also believe there is
a structural and intentional difference)!


 

offline goDel from ɐpʎǝx (Seychelles) on 2007-04-30 05:40 [#02077589]
Points: 10225 Status: Lurker | Followup to Drunken Mastah: #02077583



I don't see how "intimate experiences" can hold up as
scientific proof, let alone "overwhelming evidence". But I
don't want to go into that discussion anyway.
Let's say you're right: there is a qualitative difference
between the way humans think and AI. Do you think that
implies that AI in the sense of The Matrix would be
impossible? Or, specifically, that we can't develop AI which
is able to make political decisions, does it better than we
do, and could finally be given the authority to actually
tell us what we should do?

Assume that we can actually make an AI which is able to make
political decisions, despite the qualitative difference
you're aiming at. Does the qualitative difference even
matter? The only thing that matters is the result. If AI
takes over our planet, who's going to care whether or not
they're conscious, whether they feel like we do, or whether
they're able to love?
Personally, i think the qualitative discussion is one for
philosophers who are walking behind the developments,
thinking they can explain what it all means when, in fact, it
already doesn't matter anymore. The consequences are already
there; the only meaning left is a historical one.
Philosophers should walk in front of developments,
discussing the consequences of certain possibilities, not
'proving' whether or not something would be possible.
Philosophers can't prove anything. They can only argue.
Their task is to pave the way for what could be, or, in this
context, to explain how we could deal with autonomous machines
which can make and break our lives.


 

offline Drunken Mastah from OPPERKLASSESVIN!!! (Norway) on 2007-04-30 06:04 [#02077592]
Points: 35867 Status: Lurker | Followup to goDel: #02077589 | Show recordbag



If intimate experiences aren't scientific proof, nothing is:
The intimate experience of yourself is where you start out.
Without this centre or nexus or whatever you want to call
it, you have nothing.

"Or specifically, that we can't develop AI which is able
to make political decisions. Does it better than we do. And,
finally, we could give the authority to actually tell us
what we should do.
"

If we manage to create true AI, it will be fallible: AI
requires "the third option", the "I don't know".
Intelligence involves learning, and learning happens through
failure as well as through success. Calculated failure may
be helpful in certain cases ("how much pressure can the hull
of this ship take?"), but definitely not in all.

And, no, the result isn't all that matters. That's
probably one of the main points on which we differ. You
believe it would be sufficient if the perpetrator were
punished after the crime; I believe his internal motivations
and thoughts on what he did are paramount, and that the
pre-act projection of himself in the act, including
all responsibility, all guilt, all those things, will be
preventive (unless we're dealing with a psychopath, but
pathological cases are exceptions, and most psychopaths
aren't born psychopaths, but rather made into psychopaths by
both themselves and, invariably, the people around them).


 

offline Drunken Mastah from OPPERKLASSESVIN!!! (Norway) on 2007-04-30 06:10 [#02077593]
Points: 35867 Status: Lurker | Followup to goDel: #02077589 | Show recordbag



Also, in all things, history is present in the present;
issues believed to have been handled sufficiently in the
past invariably affect the present, and new developments
within "old" fields have effects on how those fields are
currently investigated. A thought doesn't necessarily even
relate to time.

Now, I've been writing on my paper all morning, it's time to
go outside in the sun and buy some vinyl.


 

offline goDel from ɐpʎǝx (Seychelles) on 2007-04-30 06:39 [#02077605]
Points: 10225 Status: Lurker | Followup to Drunken Mastah: #02077592



On "intimate experiences":
Anyone can have pretty intimate experiences when he or she
is on drugs. That doesn't make them true (yeah, it's a
cheap argument, i know).

But moreover, scientific proof is about repeatable
results. Intimate experiences are, by definition, not
repeatable. Sure, science starts with experiences (in the
Husserlian sense), but that doesn't imply that the
qualitative content of experiences themselves can actually
count as scientific proof. If it did, we'd live on a flat
world again, where the sun turns around the flat disc we're
living on, and our "intimate experiences" - or overwhelming
evidence - would be the scientific proof. The whole point of
science is to marginalise our intimate experiences in the
process of explaining the world we live in.

On "the result isn't all that matters":

Sure, the result isn't all that matters. But, as i tried to
explain earlier, things like intentionality are things we
can't control. I can tell you I agree with you, but there is
no way you can be certain I actually mean it. I can convince
you, but you still couldn't be certain (not unless those
intimate experiences count as overwhelming and scientific
evidence). This is a problem we cannot overcome, and that's
why, i think, we have to conclude that some things are
beyond our control, and laws are the -pragmatic- solution to
this dilemma.


 

offline rockenjohnny from champagne socialism (Australia) on 2007-04-30 07:43 [#02077614]
Points: 7983 Status: Lurker



we don't need to look into this too deeply. arming a robotic
sentry would be the same as setting any other kind of trap
designed to harm another human being. it is a bad action
designed with bad intention.


 

offline Drunken Mastah from OPPERKLASSESVIN!!! (Norway) on 2007-04-30 09:56 [#02077654]
Points: 35867 Status: Lurker | Followup to goDel: #02077605 | Show recordbag



Drugs have no particular bearing on the
argument: drug perception isn't normal experience, and the
starting point for all experiences is invariably normal
experience; it is from normal experience that we believe we
can tell the difference (both neurologically and otherwise)
between a person on drugs and a person not on drugs. Also,
as normal experience is the starting point, we use it as the
reference point for what's "real": "Did I dream that up, or
was it real? I'd better go check!"

Intimate experiences are definitely repeatable, if not
continual! They may show themselves differently, but you know
of the identity flowing through all your experiences of being
in love, for instance; each one is your love. These intimate
experiences are also experiences you have a sort of
immediate access to; you don't need to experience them through
some other thing, instead they are directly given to you,
even more directly than things that are in front of you,
that you can see and touch. Of course, knowledge about the
thing you're experiencing can be expanded via other
ways of seeing it (natural science; flat vs round earth),
but the fact remains that this has been experienced
by someone, someone you can choose to believe or not to
believe. It also remains a fact that reducing mental
phenomena to physical properties is bullshit (of course
it's beneficial for us to study neurology, but if a thought
is explained to me as a neurological firing, all meaning is
lost unless the description "neurological firing" has the
linguistic function of the words previously used to describe
it ("+" means "plus")). Natural science isn't the only
science (there are social sciences, human sciences, etc.),
and natural science can't explain concepts, meanings, or
whatever you want to call them, and the fact that the other
sciences work in different ways from the natural sciences
doesn't make their theories any less scientific. If it did,
the metatheories (evolution, for instance) of the natural
sciences would all be non-scientific.


 

offline Drunken Mastah from OPPERKLASSESVIN!!! (Norway) on 2007-04-30 10:00 [#02077656]
Points: 35867 Status: Lurker | Followup to goDel: #02077605 | Show recordbag



Yeah, there's no way for me to be certain, but since this is
about ethics and morals, normative things, I still think one
of the most important discussions would be about the attitudes
and intentions of those involved, and about enlightenment
(making sure people actually understand their
responsibility). Ethical issues are not about how things
are but about how they should be, so an ethical
discussion won't be about how it is, but about how people
should act; and if it is about how it is, it is so in a
purely critical way, in that it will be a critique (or an
applauding) of current affairs, usually aiming at a "should".


 

offline Drunken Mastah from OPPERKLASSESVIN!!! (Norway) on 2007-04-30 10:04 [#02077658]
Points: 35867 Status: Lurker | Followup to rockenjohnny: #02077614 | Show recordbag



Would it? That's not necessarily how the people doing it see
it, nor how the Others see it. A
bit of a silly example, but in Robocop, either the series or
the movies, I can't remember, when Robocop "goes berserk"
(of course, he had a good reason to), the police department
treats him as a criminal, not as something deployed by them.
If the robot is considered a moral agent, you run into a
whole lot of problems. I find it less likely that anyone
will call a mine a moral agent, or that someone would blame
the kid for having been so stupid as to step on a mine.


 

offline w M w from London (United Kingdom) on 2007-04-30 12:18 [#02077694]
Points: 21452 Status: Lurker



The illusion of self in a human mind is just a biologically
evolved computer running in parallel, as opposed to largely
sequential computers. 'The Meme Machine' paints a brainfuck
of a picture about how a 'selfplex' is just an accumulated
'story' of memes relating to a brain and its body.
I don't think there's anything magic in human thought. When
we see a mask from the back side (an inverted face), it pops
out as a normal 3d face, as if one of the optimizers of the
information fails in that instance or something.


 

