
Friday, December 9, 2011

367: The Connectionist Brain

A last look at the mind as a computer. In the previous lecture we discussed the Computational Theory of Mind, the theory also known as computationalism.

The basic idea is, of course, that a computer appears to operate much like the brain, and so the computationalists focused on the reasoning capability of the mind.

Allow me to oversimplify, for this can become very technical otherwise. The computationalists saw the mind as using a kind of language of thought with its own rules and syntax.

However, there were others who had a closer look at the brain itself. What they saw was not some logic machine, but networks of neurons and synapses. Whole networks of them.

In the picture behind me you see a schematic representation of a neural network. As I said, I simplify, but you can read it as such. When you look at all the lines, you may understand why this theory is called connectionism.

You get a multitude of inputs; these are evaluated in that 'hidden layer' and lead to an output. Example: the sonar of a submarine tries to identify mines underwater.

The sonar receives a sound (is it from a rock or a mine?), a spectrum of frequencies. All the nodes in the hidden layer have learnt what the frequencies mean, and all hidden-layer nodes inform the output nodes of their findings.

And when you take all these output data together you get the answer: "Sorry guys, that is a rock." Such a neural network doesn't know that by itself. You tell it: these are the frequencies you receive when it is a mine, so that has to be your output.

It gets a sonar input to practice with. It knows what output that input should lead to, and then it starts adjusting the settings in the hidden layer until it finally obtains the desired output.
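
To make this training process concrete, here is a minimal sketch in Python. It is my own toy illustration, not the sonar system itself: the frequency spectra and the mine/rock rule are made up and the network is tiny, but it shows how the "settings in the hidden layer" (the weights) are nudged, trial after trial, toward the desired output.

import numpy as np

rng = np.random.default_rng(0)

# Made-up training data: 200 "spectra" of 8 frequency bins each,
# labelled 1 (mine) or 0 (rock) by an arbitrary hidden rule.
X = rng.random((200, 8))
y = (X[:, :4].mean(axis=1) > X[:, 4:].mean(axis=1)).astype(float)

# One hidden layer of 6 nodes, one output node.
W1 = rng.normal(scale=0.5, size=(8, 6))
W2 = rng.normal(scale=0.5, size=(6, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for trial in range(50_000):            # very many trials, as the lecture stresses
    hidden = sigmoid(X @ W1)           # the hidden layer's "findings"
    out = sigmoid(hidden @ W2)         # output node: mine or rock?
    err = out - y[:, None]             # difference from the desired output
    # Nudge the "settings" (weights) a little to shrink the error (backpropagation).
    d_out = err * out * (1 - out)
    grad_W2 = hidden.T @ d_out / len(X)
    grad_W1 = X.T @ ((d_out @ W2.T) * hidden * (1 - hidden)) / len(X)
    W2 -= 0.1 * grad_W2
    W1 -= 0.1 * grad_W1

accuracy = ((out > 0.5).astype(float) == y[:, None]).mean()
print(f"accuracy after training: {accuracy:.0%}")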

It may sound like a simple process, but it isn't. To "teach" a neural network (I mean a computer) even a simple task can take hundreds of thousands of trials. It learns by trial and error.

Is that similar to the mind? We sometimes need only a few trials to learn new things. Young children seem to master new words every two hours, where a neural network needs millions of trials.

Advocates of connectionism often emphasize that digital computers are poor at perceptual recognition but amazingly good at mathematical tasks and data crunching.

In other words, whilst connectionist networks are good at what we are good at and bad at what we are bad at, digital computers are bad at what we are good at and good at what we are bad at!

This is taken as evidence for connectionism and against the computational theory of mind. A nice try, but there are still so many problems left that I think computers are just simplistic representations of some functions of our mind.

Take for instance rationality, logic. For example, there is a causal relationship between my thought (mental state) that Mr. X is dumb and my thought that someone is dumb.

The first thought caused the second, and there is a rational relationship between 'Mr. X is dumb' and 'Someone is dumb'. A matter of simple logic.

Take this argument, for example: if Joann dyes her hair, John will laugh. Joann dyed her hair; therefore John laughed. A valid piece of reasoning.

Now this one: if Joann dyes her hair, John will laugh. Joann did not dye her hair. Therefore John did not laugh. Hold on, that is not true: John did laugh anyway, because Joann suggested she might dye her hair. So this is an invalid piece of reasoning (denying the antecedent).
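
For readers who like to see this mechanically, here is a minimal sketch (my own illustration, not part of the Bechtel and Abrahamsen experiment mentioned below) that checks both argument forms by brute force over all truth-value combinations: an argument is valid only if no assignment makes the premises true and the conclusion false.

from itertools import product

def valid(premises, conclusion):
    """Valid means: the conclusion is true in every world where all premises are true."""
    for dyes, laughs in product([True, False], repeat=2):
        world = {"dyes": dyes, "laughs": laughs}
        if all(p(world) for p in premises) and not conclusion(world):
            return False            # found a counterexample world
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: (dyes -> laughs), dyes  |=  laughs
print(valid([lambda w: implies(w["dyes"], w["laughs"]),
             lambda w: w["dyes"]],
            lambda w: w["laughs"]))          # True: valid

# Denying the antecedent: (dyes -> laughs), not dyes  |=  not laughs
print(valid([lambda w: implies(w["dyes"], w["laughs"]),
             lambda w: not w["dyes"]],
            lambda w: not w["laughs"]))      # False: invalid (John may laugh anyway)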

We do not need much training to understand this simple logic. An experiment by Bechtel and Abrahamsen in 1991 showed that a neural network could learn the difference between valid and invalid reasoning.

However, we need to proceed with caution. To begin with, the neural network needed over half a million training trials to obtain an accuracy of only 76 percent.

Even with a further two million training trials the network was still only 84 percent accurate. This is hardly a triumphant result.

My conclusion? Despite all the exciting futuristic science-fiction computers, from HAL to Data, never expect to find a mind in your computer, at least not as the machine is defined now.


The Discussion

[13:31] herman Bergson: Thank you.... ㋡
[13:31] Lizzy Pleides: Thank you Herman!
[13:31] herman Bergson: The floor is yours..
[13:32] herman Bergson: I guess you all have to reset? ㋡
[13:33] Mistyowl Warrhol: LOL, resetting process starting.
[13:33] Sybyle Perdide: maybe overloaded
[13:33] Lizzy Pleides: yes it was a lot of information
[13:33] Bejiita Imako: reebooting my saiyan drives
[13:33] Bejiita Imako: hehe
[13:33] Bejiita Imako: but i think i got the most
[13:33] herman Bergson: take your time to reread it...
[13:34] herman Bergson: we'll wait a few minutes
[13:34] Bejiita Imako: seems fairly logical, that even after so many tries it cant get better results
[13:34] Mick Nerido: I have to go rescue a bird
[13:34] Mick Nerido: Bye
[13:34] Sybyle Perdide: may I ask..
[13:34] Bejiita Imako: the computer just program itself and then fiollows this new instructions but still don't understand what its actually doing
[13:34] herman Bergson: ok Mick
[13:34] Bejiita Imako: like we do
[13:34] Mistyowl Warrhol: So we have the best computers already.. and the best brains?
[13:35] Lizzy Pleides: both can improve i guess
[13:35] :: Beertje :: (beertje.beaumont): we have the best computers till now...they are going to be better and better..
[13:35] Sybyle Perdide: you told us, a digital system is not good enough for a
[13:35] herman Bergson: Our brains are better than computers....or better..our mind is
[13:36] herman Bergson: There was the believe to create artificial intelligence...
[13:36] Sybyle Perdide: the trails
[13:36] Bejiita Imako: yes cause we understand what we are doing, computers are much faster but they cant understand at all what they do and thus "training " a computer is very hard
[13:37] Bejiita Imako: cause as i said it cant understand at all what it actually does
[13:37] herman Bergson: and to some extend it was achieved...but not in the way our mind works, but how computers work
[13:37] Mistyowl Warrhol: Maybe what I mean is ...our brains and computers are basic.. It is how we learn to use them in the future that will be improved.
[13:37] :: Beertje :: (beertje.beaumont): why do we want a computer that works like our brain?
[13:37] herman Bergson: and what these AI computers do LOOKs like how our mind works...
[13:37] Lizzy Pleides: the computer is a tool i think
[13:38] Sybyle Perdide: good question Beertje
[13:38] Bejiita Imako: yes
[13:38] herman Bergson: but only in limited areas
[13:38] herman Bergson: Ok Beertje...
[13:38] Mistyowl Warrhol: I think our goal should be a melding of brain and computer.. When our brains can effect a computer directly.. and get feed back from the computer.
[13:38] herman Bergson: What we want is to understand the mind....
[13:39] :: Beertje :: (beertje.beaumont): yes..but why do we want a computer that works like our brain?
[13:39] :: Beertje :: (beertje.beaumont): what is the use of that?
[13:39] herman Bergson: when we can recreate in a computer what a mind does....it leads to some understanding
[13:39] Mistyowl Warrhol: Not like a computer, but with a computer.
[13:40] herman Bergson: well...to take it into absurdum....
[13:40] Mistyowl Warrhol: Example, computer for stroke patients. in which the eye looks at the screen and makes changes.
[13:40] Bejiita Imako: a yes and you blink to click
[13:40] Bejiita Imako: i've seen a such one
[13:40] herman Bergson: if we create a computer that can take over all our htinking....we could go on vacation....
[13:41] Bejiita Imako: haha but how fun would that be after a while
[13:41] Mistyowl Warrhol: We would not need a keyboard, just our brain to operate one.
[13:41] Bejiita Imako: lol
[13:41] herman Bergson: in a way that is the basic idea behind those movies
[13:41] Bejiita Imako: aaa
[13:41] herman Bergson: cyborgs...
[13:41] herman Bergson: the ones with Schwarzengegger...
[13:41] herman Bergson: ah
[13:41] Bejiita Imako: but remember how hard it is to even make a computer operate on speech
[13:41] Mistyowl Warrhol: Naww, I want one to work along side my brain..
[13:41] herman Bergson: Terminator
[13:41] :: Beertje :: (beertje.beaumont): i don't want such a computer..i'd rather think for myself
[13:41] Bejiita Imako: i saw they did it successfully in 84 even
[13:42] herman Bergson: Yes Beertje..that is the battle in Terminator...
[13:42] Sybyle Perdide: our mind is not rational all the time, so a computer who had a mind, would be so too?
[13:42] Bejiita Imako: but still i cant get my machine to understand much of what i say with different programs
[13:42] :: Beertje :: (beertje.beaumont): never seen the Terminator
[13:42] Bejiita Imako: and now its 2011
[13:42] herman Bergson: Good point Sybyle....
[13:42] Sybyle Perdide: and if so, a computer is a logical working machine.. so it would get into trouble with itself
[13:42] Sybyle Perdide: and would never be able to be like us
[13:42] herman Bergson: that is the quintessential point of my doubts about all beliefs in Artificial intelligence
[13:43] Mistyowl Warrhol: well, then it is like our brains.. our brains get into trouble all the time.
[13:43] Mistyowl Warrhol: Ok, mine does anyway.
[13:44] Lizzy Pleides: some people don't have that, lol
[13:44] Bejiita Imako: hahaha
[13:44] herman Bergson: It is interesting to see how cognitive scinece canmodel parts of our mind into computer models..
[13:44] herman Bergson: but it is only a small part of the mind
[13:44] Mistyowl Warrhol: But think of a computer, that could work with a brain, helping paraplegics to walk again.. People with brain damage to relearn..
[13:45] herman Bergson: what about desires, expectations, feelings, emotions, needs, despair?
[13:45] herman Bergson: In fact...Artificial intelligence already gives us a clue...
[13:45] Sybyle Perdide: oh goddess, I have enough despair for my own
[13:45] herman Bergson: computers are related to intelligent behavior..
[13:46] herman Bergson: and indeed...computers can display intelligent behavior
[13:46] Bejiita Imako: a computer can be made to act as if it feels when given an input but still it stritly then only follows dumb instructions exactly how it should respond
[13:46] Bejiita Imako: only what we have told it
[13:46] Bejiita Imako: and it cant understand or feel them
[13:46] herman Bergson: no...
[13:47] herman Bergson: Our next station is the phenomenon of consciousness....
[13:47] herman Bergson: a mental state not a ingle computer has achieved except in Science fiction movies
[13:48] herman Bergson: That is the hard part for all theories of mind....
[13:48] :: Beertje :: (beertje.beaumont): the hard part is yet to come..?
[13:48] herman Bergson: oh yes Beertje....
[13:48] Mistyowl Warrhol: yes, staying conscious :-)
[13:48] :: Beertje :: (beertje.beaumont): omg..
[13:48] herman Bergson: smiles
[13:49] herman Bergson: Mick is trying to save a bird while he looks like a dead bird himself :-)
[13:49] herman Bergson: We have seen all attempts to formulate a thery of mind now
[13:49] Mistyowl Warrhol: Mick is unconscious on here so he can be consious in RL
[13:50] herman Bergson: from dualism to connectionism...
[13:50] herman Bergson: and all can't explain consciousness...
[13:50] herman Bergson: the first person experience we have of our selves
[13:51] herman Bergson: That will be our final chapter of this project
[13:52] herman Bergson: the quintessential question is: Are we our brain?
[13:52] Qwark Allen: ::::::::: * E * X * C * E * L * L * E * N * T * ::::::::::
[13:52] Lizzy Pleides: and if we are not, ... what are we?
[13:52] herman Bergson: or is the mind something more than just the working of the brain
[13:53] Qwark Allen: i think we answer that question weeks ago
[13:53] herman Bergson: Yes Lizzy...indeed
[13:53] Mistyowl Warrhol: Or is the mind physical or something else. The brain being a vessel.
[13:53] herman Bergson: In what way Qwark?
[13:53] Qwark Allen: that our mind is our brain
[13:53] :: Beertje :: (beertje.beaumont): wb Mick
[13:53] herman Bergson: yes..but in what way...
[13:54] :: Beertje :: (beertje.beaumont): is the bird saved?
[13:54] Qwark Allen: when the brain is damaged, there is no mind
[13:54] herman Bergson: true...
[13:54] Qwark Allen: look at alzheimer patients
[13:54] Mick Nerido: No some people scared it away
[13:54] :: Beertje :: (beertje.beaumont): is that really true?
[13:54] Qwark Allen: they die completely oblivious to what is around them
[13:54] herman Bergson: yes ..all true Qwark
[13:54] Lizzy Pleides: alzheimer, No. they still have a personality
[13:54] Qwark Allen: they even don`t know how to eat
[13:54] Mistyowl Warrhol: The data is there, just the brain can not reach it to process it.
[13:55] Qwark Allen: in the last stages, they lost all capacities
[13:55] Qwark Allen: all
[13:55] :: Beertje :: (beertje.beaumont): but they still have a mind
[13:55] Mistyowl Warrhol: Can we teach the rest of the brain to take over control for damaged parts..
[13:55] Qwark Allen: the moments of lucid are so rare, that at a point there are no more lucid ones
[13:55] Mistyowl Warrhol: That is possible in small children.
[13:55] herman Bergson: yes Qwark...
[13:56] Qwark Allen: to see that , we are our brain, we got to see, the ones with damaged brain
[13:56] herman Bergson: and the only cause is the breakdown of the brain...
[13:56] herman Bergson: they even can point at the proteins that cause it..or the lack of those
[13:56] Qwark Allen: in alzheimer, the neurons are substituted by aluminum plates
[13:57] Qwark Allen: heehhe in a joke, we can say, in the end we can recycle them
[13:57] Lizzy Pleides: when there is an interaction between the brain and another structure and the brain don't work anymore
[13:57] herman Bergson: And Beertje said..they still have a mind....
[13:57] herman Bergson: and that is true too
[13:58] Qwark Allen: i have to go, was really nice lecture herman, one more step to realize what are we, and where are we going
[13:58] Mistyowl Warrhol: Just need to find a way to get around the block.
[13:58] Bejiita Imako: aaa yes ㋡
[13:58] herman Bergson: yes Qwark...
[13:58] Lizzy Pleides: TC Qwark
[13:58] Sybyle Perdide: bye Qwark
[13:58] Bejiita Imako: really great once again
[13:58] Mistyowl Warrhol: TC Qwark.. Tell Gemma hi and give her a hug plz
[13:58] Bejiita Imako: ㋡
[13:58] Qwark Allen: i think in the end, we`ll be half human, half computer
[13:58] Qwark Allen: °͜° l ☺ ☻ ☺ l °͜°
[13:58] Qwark Allen: lol
[13:58] Bejiita Imako: hahah ok
[13:59] Qwark Allen: ok hun
[13:59] Bejiita Imako: cyborgs
[13:59] herman Bergson: Resistance is futile..........
[13:59] Qwark Allen: something like that
[13:59] Qwark Allen: ¸¸.☆´ ¯¨☆.¸¸`☆** **☆´ ¸¸.☆¨¯`☆ H E R MA N ☆´ ¯¨☆.¸¸`☆** **☆´ ¸¸.☆¨¯`
[13:59] Qwark Allen: ahahahh lol
[13:59] Bejiita Imako: hehe was just thinking about the BORG
[13:59] Bejiita Imako: you will be assimilated, resistance is futile!
[13:59] Mistyowl Warrhol: I just want to see through the universe..
[13:59] herman Bergson: We all belong in the hyve..
[13:59] Bejiita Imako: ㋡
[14:00] herman Bergson: But there is one problem....
[14:00] herman Bergson: philosophically...
[14:00] herman Bergson: also with the BORG...
[14:00] Bejiita Imako: but the borg seems to be more machines then intelligent beings
[14:00] herman Bergson: They had that Queen..she had a MIND of her own????!!!!!!!
[14:00] Mistyowl Warrhol: NO comment !!!!
[14:00] herman Bergson: so why was HER mind different from the borg mind???
[14:01] herman Bergson: How could that be?
[14:01] Bejiita Imako: and 7 of 9 too
[14:01] herman Bergson: She had a mind filled with desires and goals
[14:01] Mistyowl Warrhol: She was a woman.. and her mind was more complex for them to grasp..
[14:01] Bejiita Imako: ah
[14:01] herman Bergson: no...7 of 9 was just released from the bog and regained her human mind
[14:02] Bejiita Imako: aa yes she readapted to her usual self
[14:02] Bejiita Imako: thats how it was
[14:02] herman Bergson: yes ..
[14:02] Bejiita Imako: all borg implants was removed sort of
[14:02] Bejiita Imako: so she became human again
[14:02] Bejiita Imako: human
[14:02] herman Bergson: The philosophical problem is the Queen of the Borg Hyve...
[14:02] Bejiita Imako: cause in the beginning she wasn't cooperative at all
[14:03] Bejiita Imako: aaa the collective mind
[14:03] herman Bergson: She was as human as every individual human mind
[14:03] Bejiita Imako: and the queen like the master cpu with the borg as slave machines or clients
[14:03] Bejiita Imako: all thinking like a grid
[14:04] herman Bergson: no Bejiita..that queen had a will of her own...and the hyve just had to follow her will
[14:04] Bejiita Imako: yes she has but the rest are like one big mind taking instructions from her
[14:04] Mistyowl Warrhol: She took in input and reprocessed it out.
[14:05] Bejiita Imako: its a bit like the LHC grid at cern, takes instructions from an operator
[14:05] Bejiita Imako: then use millions of computers to act like one big supercomputer
[14:05] Bejiita Imako: a collective mind
[14:05] Mistyowl Warrhol: Ok, bit my tongue long enough.. The rest were just not wired the same.
[14:06] herman Bergson: Again Bejiita.....the philosophical problem in the Borg issue is that that Queen had an individual mind....where did it come from ..where was it going to?
[14:06] Bejiita Imako: aa indeed
[14:06] herman Bergson: ok...
[14:07] herman Bergson: Resistance is futile..next class is nextThursday.
[14:07] herman Bergson: class dismissed
[14:07] Bejiita Imako: ㋡
[14:07] herman Bergson: and Thank you all :-)
[14:07] Bejiita Imako: this is awesome
[14:07] Bejiita Imako: ㋡
[14:07] Bejiita Imako: thx herman
[14:07] Sybyle Perdide: it was really great, Herman
[14:07] Mistyowl Warrhol: Very good class. much to think about.. Ty human, Herman
[14:07] Rodney Handrick: thanks Herman
[14:07] herman Bergson: thank you Sybyle ㋡
[14:07] Bejiita Imako: o time for Qs party
[14:08] Bejiita Imako: co soon all
[14:08] Bejiita Imako: hugs
[14:08] Bejiita Imako: cu
[14:08] herman Bergson: have fun Bejiita
[14:08] Bejiita Imako: ㋡
[14:08] Bejiita Imako: i will
[14:08] Mistyowl Warrhol: yes, just got my tp, but need to do something first.
[14:08] Mick Nerido: Thanks Herman
[14:08] :: Beertje :: (beertje.beaumont): thank you Herman:)
[14:08] Mick Nerido: I will read this later
[14:08] herman Bergson: did you save the bird Mick?
[14:09] Mick Nerido: It go away
[14:09] Sybyle Perdide: bye Misty
[14:09] Mick Nerido: Maybe tomorrow it will come back
[14:10] herman Bergson: Bye Misty...

366: The Brain and Artificial Intelligence

Just as science is eager to discover the secret of life, so the supporters of the Computational Theory of Mind are eager to find artificial intelligence in their computer.


Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it.
AI textbooks define the field as "the study and design of intelligent agents" where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.
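
As a minimal sketch of that textbook definition (my own illustration, with made-up names and numbers, not an example from the lecture): the agent perceives only part of its environment and then picks the action with the highest expected payoff.

import random

def perceive(environment: dict) -> dict:
    """The agent only senses part of the world -- here, a noisy reading of the signal."""
    return {"reading": environment["signal"] + random.gauss(0, 0.1)}

def choose_action(percept: dict) -> str:
    """Pick the action with the best expected payoff given the percept."""
    expected = {"advance": percept["reading"], "wait": 0.5}
    return max(expected, key=expected.get)

environment = {"signal": 0.8}
percept = perceive(environment)
print(choose_action(percept))   # 'advance' when the sensed signal looks promising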

John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."

John Searle coined the distinction between STRONG AI and WEAK AI. Weak AI is the kind of behavior of computers that makes them seem intelligent. Well, maybe you can say that they behave intelligently… a matter of definition, I would say.

Maybe you have tried to chat with Elbot or Eliza. They can give you the impression of being understanding and intelligent. However, they aren't. They just shuffle symbols, not understanding a single word you type.

Yet this weak artificial intelligence is used nowadays in many situations. It emulates what our mind does. You even find it in modern cars with their sensors.

But from the 1950s on, higher hopes were put on the development of strong artificial intelligence. From then on it was always "Just wait. The next generation of computers will be even more powerful. They will do the job!"

However, we still haven't reached that stage. Just a sidetrack, a thought... when we succeed in making a mind appear in a computer, does that mean we are forbidden to ever turn it off again? Wouldn't turning it off be murder, the killing of an individual mind?

Well, don't worry: the Chinese Room argument, which I discussed in the previous lecture, has shown the weak spot of strong artificial intelligence.

Computer programs are formal, meaning they use only syntactic rules to manipulate data, symbols. Our mind, however, has content. Words are not just symbols to the mind. We ascribe meaning to these symbols.

Thus we have to conclude that computer programs are neither sufficient for nor identical with minds.

Strong AI researchers have attempted to program digital computers to understand simple stories. Well known research of that kind dates back to 1977.

For example, the computer might be expected to understand a simple story about eating in a restaurant. The computer is given three kinds of input:
1. The story.

2. Some general information about restaurants and the kinds of things that typically occur there. For example: people eat in restaurants; people order their food from waiters; people are usually required to pay for what they have ordered; and so on. Researchers in strong AI call this information a 'script'.

3. Some questions about the story.
If the scientists have managed to program the computer properly then, according to strong AI, the computer will not merely answer the questions correctly, it will literally understand the story.
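
A minimal sketch of the idea, in the spirit of those 1977-style 'script' systems (this is my own toy illustration, not the original program): the script supplies default restaurant facts that the story itself never states, and questions are answered by matching symbols against the story or the script.

story = ["John went to a restaurant.", "John ordered a hamburger.", "John left."]

restaurant_script = {
    # default facts assumed for any restaurant visit
    "eat": "a customer eats what he ordered",
    "pay": "a customer pays for what he ordered",
}

def answer(question: str) -> str:
    q = question.lower().rstrip("?")
    # Facts stated literally in the story: pure symbol matching.
    for sentence in story:
        if all(word in sentence.lower() for word in q.split()[2:]):
            return "Yes, the story says so."
    # Facts never stated, but filled in from the script's defaults.
    for keyword, default in restaurant_script.items():
        if keyword in q:
            return f"Probably yes: by the script, {default}."
    return "The script does not say."

print(answer("Did John order a hamburger?"))   # answered from the story itself
print(answer("Did John pay the bill?"))        # answered from the script's defaults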

However, even if I were to become a super expert in answering questions in Chinese by shuffling symbols according to syntactic rules, I would never understand the questions.

When we get the questions in English we are aware of what the questions mean, while I am not aware of the meaning of the Chinese symbols I handle.

And here we hit the most difficult issue of our long quest. We must conclude that the Chinese Room setup (shuffling symbols according to rules) is insufficient for conscious awareness of the meaning of the questions.

Or stated in a more general way: computation is insufficient for consciousness. There is more to the mind than you can emulate with computer programs.


The Discussion

[13:25] herman Bergson: Thank you...
[13:25] Qwark Allen: ::::::::: * E * X * C * E * L * L * E * N * T * ::::::::::
[13:25] herman Bergson: If you have any questions...you have the floor
[13:25] Carmela Sandalwood: Seems to me you leave out an important aspect of consciousness: it is part of an environment
[13:25] Qwark Allen: at least with the technology of today
[13:26] herman Bergson: that is what they always say Qwark.. ㋡
[13:26] Mick Nerido: Is it theoretically possible for computer brains to be concious?
[13:26] herman Bergson: What do you mean by 'part of an environment, Camela?
[13:26] Qwark Allen: computers are around at few decades
[13:26] Carmela Sandalwood: The Chinese room is not reacting to an environment: it is only suffling symbols
[13:26] Gemma Allen (gemma.cleanslate): one question would we be able to turn it off?? i say no because it would not let us if it becomes conscious
[13:27] Bejiita Imako: yes another way to see it
[13:27] herman Bergson: It is reacting on the questions that come in...that is an environment
[13:27] Qwark Allen: and they will achieve our rate of of processing information in 25 years
[13:27] Carmela Sandalwood: so, suppose I say 'is there a flower in the garden?'...simply shuffling symbols can't answer that even if there is an algorithm
[13:27] Qwark Allen: lets see by then, what will be the question by then
[13:27] Carmela Sandalwood: there has to be sensory input
[13:27] Bejiita Imako: for us to understand what the computer puts out you must convert it to analog signals first
[13:27] Bejiita Imako: and that the computer can never understand it can justtunderstand 1 and 0
[13:28] Bejiita Imako: and 1 and 0 is as meaningless to us as the analog is to a computer
[13:28] Carmela Sandalwood: that's way too simplistic Bejiita
[13:28] herman Bergson: Wait....
[13:28] Gemma Allen (gemma.cleanslate): i do not think artificial intelligence will ever take over but it may come close!
[13:28] Carmela Sandalwood: neurons only 'understand' on and off
[13:28] herman Bergson: One issue at a time....
[13:28] herman Bergson: Furst Qwark....
[13:29] Qwark Allen: in the next decades we`ll have quantic computers, that have 1, 0 and also -1
[13:29] herman Bergson: even in 25 years the computer will not have changed Qwark...it is a syntactical machine....
[13:29] Bejiita Imako: hmm the question is how does our brain store information and do the brain have some sort of A7D D7A converter to put meaning to everything
[13:29] Bejiita Imako: or do we interpret it directly?
[13:29] herman Bergson: when there is a machine that generates consciousness it will not be called a computer....
[13:29] Gemma Allen (gemma.cleanslate): that makes sense
[13:30] Qwark Allen: when they decoded the language brain use for comunicate between neurons, and it`s not that diferent
[13:30] herman Bergson: Then the sensory input question of Camela...
[13:30] Carmela Sandalwood: well, we might call it a robot or artificial intelligence...or Robert
[13:30] Mistyowl Warrhol: But isn't the brain a computer? It processes data.
[13:30] Bejiita Imako: compuetr means calculator and thats what a computer does, averything is just binary math to a computer
[13:30] Qwark Allen: its between charged positively, or negatively
[13:30] Qwark Allen: like 0 and 1
[13:30] herman Bergson: I don't think that it makes much difference to the Chinese room argument...
[13:30] Gemma Allen (gemma.cleanslate): ☆*¨¨*<♥*''*BEJIITA!!! *''*<♥:*¨¨*☆
[13:30] Gemma Allen (gemma.cleanslate): but with emotion and feeling
[13:30] Bejiita Imako: but we don't use mathematical formulas to listen to music
[13:30] herman Bergson: We just built a room with cameras and microphones and so on....
[13:30] Qwark Allen: in a way, when computers have the -1 in their language, maybe they will be ahead of us
[13:31] herman Bergson: the basic principle stays the same....
[13:31] Qwark Allen: cause they will have a state that we cannot have
[13:31] Carmela Sandalwood: Suppose your syntactical rules require that you look outside at times and based on what you see, there rules are different
[13:31] Qwark Allen: the minus one
[13:31] Mick Nerido: We are carbon based computers are sillicon based
[13:31] herman Bergson: yes like we have computer programs that can recognize faces
[13:31] Carmela Sandalwood: I don't see what happens in our brain as much different than what happens in a computer
[13:31] herman Bergson: The basic issue here is.....there is output form the computer...
[13:32] Qwark Allen: i think we are having a narcisist approach
[13:32] Qwark Allen: like we are the only ones
[13:32] herman Bergson: but this output does not imply that the computer has any understanding of what it is doing..
[13:32] Qwark Allen: but, i believe AI will come
[13:32] Carmela Sandalwood: 'understanding' is about reacting appropriately to an environment so that you maximize the chances of survival or meeting other goals
[13:32] Qwark Allen: with some capacities of us
[13:32] Carmela Sandalwood: how do you know I have understanding? how do I know you do?
[13:32] herman Bergson: whether it is recognizing faces, checking spelling, doing calculations...
[13:32] Qwark Allen: movies like blade runner will be like a vision of the future
[13:32] herman Bergson: it is all the same to it
[13:33] CONNIE Eichel whispers: thanks gemma :)
[13:33] herman Bergson: Yes Qwark....we love such fantasies
[13:33] Bejiita Imako: yes its just calculating binary math
[13:33] Carmela Sandalwood: and it is all the same to our neurons
[13:33] Gemma Allen (gemma.cleanslate): :-)
[13:33] Sybyle Perdide: you say, the difference is in understanding?
[13:33] Carmela Sandalwood: meaning doesn't exist at the neural level (or at the level of transistors)
[13:33] herman Bergson: I wouldnt say that Carmela...
[13:34] Bejiita Imako: and also as i said before a cpu can only understand some basic instructions, the compiler have to build the machine co with just these basic commands or the computer wont understand it
[13:34] Bejiita Imako: the x 86 instruction set
[13:34] herman Bergson: slow down Bejiita!! pla
[13:34] herman Bergson: plz
[13:34] Carmela Sandalwood: and a neuron only react to certain stimuli
[13:34] Carmela Sandalwood: so?
[13:34] Bejiita Imako: ah
[13:34] Mistyowl Warrhol: One difference between brain and computer, we calculate by also using emotional reactions learned over time.. using our 5 senses (6 according to some) Can we teach computers to use emotions?
[13:34] herman Bergson: there is one thing we still haven't discussed...and that is consciousness....
[13:35] Carmela Sandalwood: and those emotions are calculated by the brain to react
[13:35] herman Bergson: The awareness of our existence...
[13:35] Carmela Sandalwood: there are algorithms there also
[13:35] Carmela Sandalwood: awareness is an internal representation: data
[13:35] herman Bergson: Well that is one of those issues Misty...
[13:36] herman Bergson: What does that mean Carmela?
[13:36] Carmela Sandalwood: our self-awareness is simply an internal collection of data representing our internal state...it isn't perfect, but it exists and is ultimately binary in character
[13:36] herman Bergson: A second point is the fist person awareness...
[13:37] herman Bergson: Here we have a problem Carmela....
[13:37] herman Bergson: for there are no two mental states of self awareness alike in two different persons...
[13:37] Qwark Allen: ㋡ ˜*•. ˜”*°•.˜”*°• Helloooooo! •°*”˜.•°*”˜ .•*˜ ㋡
[13:37] Qwark Allen: Hey! HAO
[13:37] Carmela Sandalwood: of course not...the systems are different
[13:37] Carmela Sandalwood: so?
[13:37] herman Bergson: What it is like to be me...is a special mental stat for me....
[13:38] herman Bergson: nobody in the whole world has that....
[13:38] Carmela Sandalwood: I'm not so sure that the mental state of 'being you' will be forever limited to you
[13:38] Carmela Sandalwood: it may be possible to transfer that data in the future
[13:38] herman Bergson: so my mental state of "what it is like to be me?" has a special property no other mental state in the world has
[13:38] herman Bergson: except my own of course....
[13:39] Gemma Allen (gemma.cleanslate): :-)
[13:39] Carmela Sandalwood: the question is whether the data can be transfered and used by the recipient
[13:39] Gemma Allen (gemma.cleanslate): that horse is hungry
[13:39] herman Bergson: We have to answer the question how to understand this first person property
[13:40] herman Bergson: well....we almost have seen all attempts to understand the mind....
[13:40] herman Bergson: soon we'll get to the issue of what makes the mind: consciousness
[13:41] Mistyowl Warrhol: Ok, if we each have our own thoughts, unique to us, where is that stored. If it is in tissue, does any of that transfer in situations of organ donations?
[13:41] herman Bergson: My conclusion of today is that I wouldn't bet on consciousness in machines
[13:41] Carmela Sandalwood: it might if you transferred brains, but not likely otherwise
[13:41] herman Bergson: That is a fascinating question Misty....
[13:41] Bejiita Imako: machines work way too different from us
[13:42] herman Bergson: Because the greek thought that the mind was in the heart....
[13:42] Hao Zaytsev: hehe
[13:42] herman Bergson: the Egyptians also didn't have a high esteem of the brain...they threw it away when mummifying a pharaoh…
[13:42] Mistyowl Warrhol: There are some cases in which ppl have seem to remember small data from someone that donated.. But that is a topic for another time :-)
[13:42] Carmela Sandalwood: so they were wrong...it happens
[13:43] Mick Nerido: There have be recent mouse brain experiments that shows memories could be transferred
[13:43] herman Bergson: Aristotle thought that the brain was an organ to cool the blood
[13:43] herman Bergson: Well....
[13:43] Carmela Sandalwood: Aristotle was also wrong about physics
[13:43] Mistyowl Warrhol: Some think the brain is something to play with :-)
[13:43] herman Bergson: We have the believe that the mind is (in) the brain....
[13:43] Gemma Allen (gemma.cleanslate): ♥ LOL ♥
[13:43] herman Bergson: In a way...as if the body doesn't play a part in it at all
[13:44] Carmela Sandalwood: yes, that is also simplistic Herman....the body is required for the sensory input at least
[13:44] herman Bergson: Some people believe that donor organs also contain something of the donating person....not just tissue
[13:45] Gemma Allen (gemma.cleanslate): i do'nt
[13:45] Mick Nerido: Frankinstein
[13:45] Sybyle Perdide: in a practical sense its true
[13:45] Carmela Sandalwood: I'd have to see the data...but that also doesn't make it non-mechanical
[13:45] Carmela Sandalwood: more specifically chemical
[13:45] Gemma Allen (gemma.cleanslate): we will never really agree on this :-)
[13:46] Mistyowl Warrhol: I think the biggest difference between brain and computers.. the brain is being bathe by chemicals from around the body which in turn effects how the brain responds. Whereas, the computer.. just uses data that was inputed, even though it can reassemble that data by design.
[13:46] herman Bergson: That isn't necessary Gemma....
[13:46] herman Bergson: as long as we keep thinking about it and questioning it..
[13:46] Gemma Allen (gemma.cleanslate): Yes-ah!
[13:46] Hao Zaytsev: aliens
[13:46] Carmela Sandalwood: MistyOwl: that may be, but even that can be represented by appropriate computing internals
[13:46] herman Bergson: Nice gus Hao.. ㋡
[13:46] Gemma Allen (gemma.cleanslate): herman it is time to put the christmas trees in the yard there
[13:47] Hao Zaytsev: hehe
[13:47] Gemma Allen (gemma.cleanslate): and snow
[13:47] Bejiita Imako: ah yes
[13:47] Bejiita Imako: really strange no snow here yet
[13:47] Bejiita Imako: and 7 deg warm outside now
[13:47] herman Bergson: Oh my ...so true Gemma.....
[13:47] Gemma Allen (gemma.cleanslate): 3 weeks till christmas
[13:47] herman Bergson: SO thank you all again for your interest and participation today
[13:47] Gemma Allen (gemma.cleanslate): ♥ Thank Youuuuuuuuuu!! ♥
[13:47] herman Bergson: I have to dismiss the class
[13:47] Bejiita Imako: really interesting
[13:47] Hao Zaytsev: why is a horse on the floor?
[13:47] Bejiita Imako: ㋡
[13:48] Gemma Allen (gemma.cleanslate): no clue
[13:48] herman Bergson: Have to put up my Xmas tree to keep Gemma happy
[13:48] Gemma Allen (gemma.cleanslate): trees
[13:48] Bejiita Imako: ㋡
[13:48] Qwark Allen: .-)))
[13:48] Sybyle Perdide: thank you Herman..it was great
[13:48] Mistyowl Warrhol: Carmela, the chemical of the body are altered by changes in our enviroment and our emotional response to that. Cant we reproduce that mechically?
[13:48] Hao Zaytsev: damn
[13:48] Hao Zaytsev: nice trip
[13:48] Bejiita Imako: tis was really great and interesting
[13:48] Gemma Allen (gemma.cleanslate): Bye, Bye ㋡
[13:48] Gemma Allen (gemma.cleanslate): fir biw
[13:48] :: Beertje :: (beertje.beaumont): it was very interesting Herman, thank you
[13:48] CONNIE Eichel: bye gemma
[13:48] Carmela Sandalwood: Yes, among other ways, but changing the electrical or magnetic environment
[13:48] Bejiita Imako: well must head back now but cu soon again
[13:49] Bejiita Imako: ㋡
[13:49] Qwark Allen: ¸¸.☆´ ¯¨☆.¸¸`☆** **☆´ ¸¸.☆¨¯`☆ H E R MA N ☆´ ¯¨☆.¸¸`☆** **☆´ ¸¸.☆¨¯`
[13:49] Qwark Allen: ty
[13:49] Hao Zaytsev: im excited
[13:49] Qwark Allen: was very good as usual
[13:49] Bejiita Imako: YAY! (yay!)
[13:49] Carmela Sandalwood: and that could even be in a feedback loop controlled by the CPU
[13:49] CONNIE Eichel: well, bye bye all, great class professor :)
[13:49] Carmela Sandalwood: TY Herman
[13:49] CONNIE Eichel winks
[13:49] herman Bergson whispers: thank you CONNIE
[13:50] CONNIE Eichel: :)
[13:50] Mistyowl Warrhol: Very interesting and fun. I have really enjoyed this and meeting everyone :-)
[13:50] herman Bergson: Nice, Misty ㋡
[13:50] herman Bergson: I do this now for more than 5 years
[13:50] Carmela Sandalwood: It is *very* interesting...thank you for doing it
[13:51] Carmela Sandalwood: while I may disagree, it is fun to think about it all
[13:51] herman Bergson: It would be a very dull class when everyone agreed with everyone
[13:51] Carmela Sandalwood: quite true
[13:52] Carmela Sandalwood: I think the problem is ultimately what 'understand' means...do we actually have anything but an operational definition?
[13:52] herman Bergson: The main goal of philosophy is not to get the right answers...
[13:52] herman Bergson: as you give an example Carmela...philosophy is about asking the right questions
[13:53] Carmela Sandalwood: *smiles*
[13:53] Mistyowl Warrhol: Or asking wrong questions, to get a different view?
[13:53] Carmela Sandalwood: and exploring the possible answers
[13:53] Mistyowl Warrhol: :-)
[13:53] herman Bergson: right...
[13:54] herman Bergson: When you take the Chinese Room argument for instance....
[13:54] Mistyowl Warrhol: Ok, I did try to be good and not overload everyone circuits LOL
[13:54] Carmela Sandalwood: it seems that often philosophy is about figuring out what the 'correct' definitions actually are
[13:54] herman Bergson: Look it up in the Internet Encyclopedia of pHilosophy....
[13:55] herman Bergson: Searle himself mentioned a number of more than 200 counter arguments to it..
[13:55] herman Bergson: Yet I think it is a pretty convincing argument that shows that computers can not be identical with our brain...
[13:56] herman Bergson: No semantics , no awareness, no consciousness....
[13:56] Carmela Sandalwood: and I think it misses some crucial aspects of both how computers work and how understanding happens
[13:56] Carmela Sandalwood: but semantics can be syntactical
[13:56] herman Bergson: Despite all science fiction computer minds
[13:56] Carmela Sandalwood: in an environment
[13:57] herman Bergson: no Carmela....
[13:57] Mistyowl Warrhol: But it is a fun idea to think about.. Computers gaining knowledge on their own.
[13:57] Carmela Sandalwood: My guess is that AI will happen when we program robots to change internal states appropriately
[13:57] herman Bergson: No computer will ever "know" the truth value of any complex symbol
[13:57] Carmela Sandalwood: why not? and why do we?
[13:58] herman Bergson: That is the big question indeed
[13:58] herman Bergson: Let's postpone the answer to further lectures ㋡
[13:58] Sybyle Perdide: oh
[13:58] Sybyle Perdide: the suspense was growing
[13:59] Carmela Sandalwood: *smiles*...sounds good
[13:59] Sybyle Perdide: and now that cliffhanger
[13:59] herman Bergson: Then I have time to set up my Chrismas tree and snow here ^_^
[13:59] Sybyle Perdide: laughs
[13:59] Sybyle Perdide: thats an argument
[13:59] Carmela Sandalwood: hopefully my schedule will be nice and I'll be able to attend
[13:59] herman Bergson: You are always welcome Carmela ㋡
[14:00] Carmela Sandalwood: as long as RL complies ;)
[14:00] Carmela Sandalwood: thank you for the discussion
[14:00] Sybyle Perdide: I wish you a nice evening
[14:00] Mistyowl Warrhol: Yes, the other part of the equation. RL !!!
[14:00] Sybyle Perdide: see you on tuesday :)
[14:00] herman Bergson: My pleasure Carmela ㋡


Thursday, November 24, 2011

364: The Computational Mind

In the previous lecture I introduced you to the syntactic and semantic properties of symbols. The reason was that computers work with symbols: basic symbols and complex symbols.

In the sentence "Fido is the name of my dog" you could regard "Fido" as a basic symbol and the whole sentence as a complex symbol.

Syntactic properties are derived from the symbol itself. Properties like "It has 4 characters in a specific order", "it is black on white" and so on.

To understand what "Fido" means, a semantic property, you need more than the symbol itself. At least you need me pointing at my dog saying "Look, that is Fido."

Why so much focus on this distinction between syntax and semantics? That is because some people see an analogy between brain and computer, or vice versa.

Philosophers of mind who endorse the computational theory of mind have come to the view that the mind is a computer and that thinking is symbol manipulation.

Now the question is: "What is a computer?" A computer is a 'syntactic engine', a device which organizes and manipulates symbols on the basis of their syntactic properties.

Let me give you an example. When I use my word processor to write a story about my dog, I could use the 'Find' function and type 'Find Fido'.

What Fido is, is unknown to my computer, and not even important for performing the 'find'. It is just a comparison of symbols on the syntactic level.
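
As a minimal sketch of that point (my own illustration): the 'Find' operation below compares character codes only; nothing in it represents what a 'Fido' is.

text = "My dog is called Fido. Fido likes to chase cats."

def find_all(document: str, pattern: str) -> list[int]:
    """Return every index where the symbol sequence `pattern` occurs."""
    return [i for i in range(len(document) - len(pattern) + 1)
            if document[i:i + len(pattern)] == pattern]   # pure symbol comparison

print(find_all(text, "Fido"))   # [17, 23] -- positions only, no idea what a 'Fido' is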

Yet a computer seems to respect semantic properties of complex symbols too. One semantic property is 'TRUE'. A complex symbol, a statement, has a truth-value.

Take this argument:
A. All avatars are made of pixels
B. herman is an avatar
Then the computer comes with the complex symbol:
C. herman is made of pixels

You would say… look, a computer can also handle semantic properties of complex symbols, because that conclusion, typed in by nobody but generated by the computer (program), is TRUE!

Thus a computer looks to be capable of rational reasoning. It 'sees' causal relations between thoughts like A and B. However, this is not the case. We have to provide the semantics.

The computer has only a symbol manipulation rule which says: if A is marked TRUE and B is marked TRUE then you can mark C as TRUE, or a rule like if all A have property X and B is an A then B has property X.

WE have to give the computer the semantics by telling the program: A = TRUE, B = TRUE. Then it applies one of the syntax rules I just stated.
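
Here is a minimal sketch (my own illustration) of such a purely syntactic rule at work: the program shuffles symbols marked TRUE and derives C, while the marking of A and B as TRUE, the semantics, was supplied by us.

facts = {
    ("all_have", "avatar", "made_of_pixels"): True,   # A: all avatars are made of pixels
    ("is_a", "herman", "avatar"): True,               # B: herman is an avatar
}

def infer(facts: dict) -> dict:
    """Syntax rule: if 'all X have P' is marked TRUE and 'y is an X' is marked TRUE, mark 'y has P' TRUE."""
    derived = dict(facts)
    for (rel1, kind, prop), true1 in facts.items():
        if rel1 != "all_have" or not true1:
            continue
        for (rel2, individual, kind2), true2 in facts.items():
            if rel2 == "is_a" and kind2 == kind and true2:
                derived[("has_property", individual, prop)] = True   # C: herman is made of pixels
    return derived

print(infer(facts))
# ('has_property', 'herman', 'made_of_pixels') ends up marked TRUE --
# derived by shape-matching alone, without the program understanding anything.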

And that is what the computational theory of mind holds: that all our thinking is, just like what a computer does, syntactic.

Putting all this together we can say that, according to the computational theory of mind, thoughts are complex symbols with syntactic and semantic properties.

Thinking, the manipulation of thoughts, is achieved by processors which, whilst sensitive only to the syntactic properties of the thoughts involved, nevertheless respect their semantic properties. In other words, thinking is computation.

Do you know Eliza or Elbot? Well, then read this short conversation with Elbot. You can talk for hours with him.

http://www.elbot.com/

H: You are a tin can Elbot
E: Was that a compliment or an insult?
H: Neither
E: Are you trying to make fun of my artificial intelligence?
If it was neither a compliment nor an insult, tell me
what it WAS!
H: It was an objective observation, Elbot
E: Keep making observations.
One of these days they might add up to something.
H: Yes, they add up to the observation that you are a tin
can.
E: The brilliant robot author Joseph Canrod said once:
The great enemy of perfection is thought.' Think about that!
H: Joseph Canrod did not say that.
E: I'm not sure I could explain that in language
that humans could understand.
H: I can read, Elbot
E: That's nothing new! Even I once read bar codes.
H: I repeat: Joseph Canrod did not say that.
E: I prefer not to repeat myself. In this respect
I differ greatly from most human beings.
H: You really are an empty tin can.
E: Still, I would never make the sweeping generalization
of all tins being empty, would you?
H: Right.
E: Great! Agreeing with me is the first step in your evolution
into a higher being. Please accept my congratulations.

Also have a chat with Eliza:
http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm
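
To see how little machinery such conversation needs, here is a minimal Eliza-style sketch (my own toy, not Weizenbaum's actual program): a few canned patterns and reflected pronouns, shuffling symbols with no grasp of their meaning.

import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}
RULES = [
    (r"i need (.*)",      "Why do you need {0}?"),
    (r"i am (.*)",        "How long have you been {0}?"),
    (r"(.*) mother(.*)",  "Tell me more about your family."),
    (r"(.*)",             "Please go on."),                 # fallback
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the reply points back at the user."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, sentence.lower().strip(".!?"))
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."

print(respond("I need a holiday"))           # Why do you need a holiday?
print(respond("I am tired of my computer"))  # the pattern fills in the reflected fragment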

After that, in the next lecture we'll discuss whether computation in a computer generates a mind or not. Did I chat with a mind named Elbot?


The Discussion

[13:28] herman Bergson: Thank you...
[13:28] herman Bergson: and Welcome Rodney :-)
[13:29] Farv Hallison: thank you professor
[13:29] Rodney Handrick: hi Herman
[13:29] Sybyle Perdide: hard stuff again Herman.. thank you
[13:29] Farv Hallison: hello Rodney, are you a tin can?
[13:29] herman Bergson: Well Sybyle...try Eliza..she wont be hard on you at all ㋡
[13:29] Rodney Handrick: hi
[13:30] Rodney Handrick: No Farv...I'm not
[13:31] Farv Hallison: you have a pixelated outline, Rodney. I am wondering if you are a hologram.
[13:31] Rodney Handrick: No
[13:31] Farv Hallison: hello Ayi
[13:32] herman Bergson: No..Rodney is one of the die hearts of th Philosophy Class for years now
[13:32] herman Bergson: diehard is it , isnt it?
[13:32] Ayi Coeur: hello all
[13:32] Rodney Handrick: This is true...:-)
[13:32] herman Bergson: But many questions about this lecture?
[13:33] herman Bergson: Understandable...
[13:33] herman Bergson: the message is only that computers are syntactic engines...
[13:33] Mick Nerido: Talking to Elza was like talking to a therapist lol
[13:33] herman Bergson: It means...they don't deal with content, but only with the shape of symbols
[13:34] herman Bergson: Yes Mick...she is good
[13:34] Mick Nerido: It turned my questions back on me.
[13:34] herman Bergson: I had lengthy conversations with her which made 100% sense
[13:35] Sybyle Perdide: lucky one
[13:35] herman Bergson: yes of course Mick...that is the Rogerian approach
[13:35] Sybyle Perdide: she tried to escape the conversation twice
[13:35] Farv Hallison: Did she introduce any new words into the conversation?
[13:35] Rodney Handrick: I experienced the sam e thing Mike
[13:36] herman Bergson: She can Farv
[13:36] Mick Nerido: She is not a mind but a computer program designed to resemble our minds...
[13:36] herman Bergson: Oh yes Mick...
[13:36] Ayi Coeur: who is Elza?
[13:36] herman Bergson: But then you come to Artificial Intelligence..
[13:37] herman Bergson: Eliza is a computer program Ayi...
[13:37] herman Bergson: a Rogerian psycho therapist
[13:37] Ayi Coeur: ah ok:)
[13:37] Farv Hallison: as a computer program, she could be programmed to remeber previous conversations and accumulated a big dictionary of the meaning of words.
[13:37] Mick Nerido: Cheap therapy lol
[13:38] herman Bergson: Yes Farv...that is what Elbot seems to do
[13:38] Farv Hallison: a cheap program would just spew back sex words
[13:39] herman Bergson: Eliza doesn't like bad words Farv
[13:39] Farv Hallison: so Eliza isn't cheap/.
[13:39] herman Bergson: Oh no...
[13:39] herman Bergson: It is a scientific achievement
[13:39] Lizzy Pleides: in sl we have a parrot who can talk like Eliza
[13:39] herman Bergson: The name connected to it is Weizenbaum
[13:40] Farv Hallison: I thought she was Eize Dolittle.
[13:40] herman Bergson: Joseph Weizenbaum
[13:40] Rodney Handrick: I wonder how many servers are used to run Eliza
[13:41] Rodney Handrick: And lines of code?
[13:41] herman Bergson: "Computer power and human reason"
[13:42] herman Bergson: It is not a big program Rodney...
[13:42] herman Bergson: you can find a open source Java version on the net
[13:42] Rodney Handrick: really...java...I have to look it up
[13:42] Ayi Coeur: i think it's worth trying what she has to say:)
[13:42] Sybyle Perdide: but Eliza never comes to a new level of knowledge, didn't she?
[13:42] Farv Hallison: I think I saw a LSL version.
[13:43] Rodney Handrick: what is the program code called
[13:43] herman Bergson: yes...I must have it somewhere Rodney
[13:43] Farv Hallison: I saw eliza.bas
[13:43] herman Bergson: Weizenbaum wrote it in BASIC
[13:43] herman Bergson: Where Farv???
[13:43] Ayi Coeur: auw..what a lot of work
[13:44] Ayi Coeur: wb Mick
[13:44] Rodney Handrick: I'm currently taking a Stanford U course in artificial intel
[13:44] herman Bergson: Hi Mick
[13:44] herman Bergson: cool Rodney
[13:44] Mick Nerido: hit wrong button lol
[13:44] herman Bergson: lol
[13:44] Farv Hallison: Is Eliza teaching the course?
[13:45] herman Bergson: means you might be in time for the next lectures
[13:45] Rodney Handrick: lol
[13:45] Farv Hallison: I am the wrong button.
[13:45] herman Bergson: No Farv my name is herman
[13:45] Farv Hallison: metaphorically.
[13:45] Rodney Handrick: Is this it?
[13:45] Rodney Handrick: http://smallbasic.sourceforge.net/?q=node/56
[13:46] herman Bergson: I'd love to have a source code of Eliza and translate it to LSL
[13:47] herman Bergson: THANK YOU Rodney
[13:47] herman Bergson: I might make a philosophical Eliza :-)
[13:48] Ayi Coeur: :) in basic?
[13:48] Lizzy Pleides: we prefer you Herman!
[13:48] herman Bergson: no..LSL...so that she can work in SL
[13:48] herman Bergson: Don't worry Lizzy...I am still in charge here
[13:48] Sybyle Perdide: giggles
[13:48] Ayi Coeur: and he stays that way,,,i guess..
[13:49] herman Bergson: well...I guess we are done...
[13:49] Ayi Coeur: missed the whole lecture..thanks to sl
[13:49] herman Bergson: so thank you all and today especially Rodney for the URL
[13:49] Sybyle Perdide: it was brilliant, Avi ;)
[13:49] herman Bergson: class dismissed
[13:49] Rodney Handrick: sure...not a problem
[13:49] Lizzy Pleides: yes very good!


Wednesday, November 16, 2011

362: Frowning at functionalism

Sometimes it is possible to show that one theory (the reduced theory) can be derived from another (the reducing theory).

In that case an inter-theoretic reduction has been achieved. Notice that the emphasis here is on theories. 'Inter-theoretic' means 'between theories'.

The example of inter-theoretic reduction standardly given is the derivation of classical thermodynamics from the kinetic theory of gases.

The former theory describes the behavior of gases in terms of their temperature, pressure and volume. The latter describes the behavior of gases in terms of the kinetic energy and impacts of gas molecules.

The derivation is achieved with the help of 'bridge-laws' which identify the terms of one theory with those of another. For example, the pressure of a gas is identified with the mean kinetic energy of its gas molecules.
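
As a worked illustration of such a bridge law (a standard kinetic-theory result that I add here myself; it is not spelled out in the lecture, and k_B denotes Boltzmann's constant):

\[
P V = \tfrac{2}{3}\, N \langle E_k \rangle \quad\text{(kinetic theory)}
\qquad\text{and}\qquad
P V = N k_B T \quad\text{(ideal-gas thermodynamics)},
\]
\[
\text{so identifying the two gives the bridge law}\quad
\langle E_k \rangle = \tfrac{3}{2}\, k_B T .
\]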

For the moment this is how I look at the identity between mental states and brain states. It is our brain / mind that generates knowledge about reality.

All this knowledge is in the form of (tested) theories. As the example shows, theories can eventually be reduced to more basic theories, e.g. psychological theories to neurobiological theories.

This is not a law of physics but an observed fact, a fact of which we don't know whether it holds universally for all our theories about reality, but it is an indication of the structure of our knowledge of reality.

A completely different subject, but just as a hint to think about: the structure of knowledge.

And then there is functionalism, promising to solve problems to which the identity theory had no answer. The view is: don't ask what stuff something is made of, just look at what it does.

If some entity does what I call feeling pain, then that sentient being has the mental state of pain, to put it in a straightforward way. This implies that anything can have mental states.

I still don't know why exactly, but I don't like functionalism as an answer, although it is said that almost all physicalists (materialists) are functionalists. Probably I am not (yet:-).

Don't ask me for rock solid arguments at this moment. Philosophy is a creative adventure, not just plain and simple logic and ratio.

And then you run into the question: who "invented" functionalism? It began in the 1950s and 1960s, and yes, alongside the development of computers.

The initial inspiration for functionalism comes from the useful analogy of minds with computing machines. Hilary Putnam was certainly not the first to notice that this comparison could be theoretically fruitful.

Hilary Whitehall Putnam (born July 31, 1926) is an American philosopher, mathematician and computer scientist, who has been a central figure in analytic philosophy since the 1960s, especially in philosophy of mind, philosophy of language, philosophy of mathematics, and philosophy of science, as Wikipedia tells us.

His idea was to model functions using the contemporary idea of computing machines and programs, where the program of the machine fixes how it mediates between its inputs and standing states, on one hand, and outputs and other standing states, on the other.

Modern computers demonstrate that quite complex processes can be implemented in finite devices working by basic mechanical principles.

If minds are functional devices of this sort, then one can begin to understand how physical human bodies can produce the tremendous variety of actions and reactions that are associated with our full, rich mental lives.

The best theory, Putnam hypothesized, is that mental states are functional states, that the mind is of a functional kind.
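
To make Putnam's idea concrete, here is a minimal sketch (my own toy illustration, not Putnam's own machine table): the state labelled 'pain' is characterized entirely by its causal role, that is, which inputs send the system into it, what output it produces and which state it passes into next, not by the stuff that realizes the table.

MACHINE_TABLE = {
    # (current state, input)        : (output,            next state)
    ("content", "tissue damage")    : ("say 'ouch'",      "pain"),
    ("pain",    "aspirin")          : ("sigh of relief",  "content"),
    ("pain",    "tissue damage")    : ("wince",           "pain"),
    ("content", "nothing")          : ("hum a tune",      "content"),
}

def step(state: str, stimulus: str) -> tuple[str, str]:
    """Return (behavioral output, new standing state) for a given stimulus."""
    return MACHINE_TABLE.get((state, stimulus), ("do nothing", state))

state = "content"
for stimulus in ["tissue damage", "tissue damage", "aspirin"]:
    output, state = step(state, stimulus)
    print(f"{stimulus!r} -> {output!r}, now in state {state!r}")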

So, to put functionalism to the test, our next question should be: can computers have mental states?


The Discussion

[13:19] herman Bergson: Thank you ㋡
[13:19] Chantal (nymf.hathaway) is Offline
[13:20] Qwark Allen: what is "bothering" you about functionalism?
[13:20] Qwark Allen: ::::::::: * E * X * C * E * L * L * E * N * T * ::::::::::
[13:20] Lizzy Pleides: brilliant!
[13:20] Sybyle Perdide: great
[13:20] herman Bergson: Good question Qwark....
[13:20] Ladyy Haven (ladyy.haven) is Offline
[13:20] herman Bergson: the thing is....it is a metaphysical approach....
[13:21] Ladyy Haven (ladyy.haven) is Online
[13:21] Elle (ellenilli.lavendel) is Offline
[13:21] herman Bergson: it says ...a combination of in put and output and some side effect...that is for instans 'pain'
[13:21] Wonny (wonda.masala) is Offline
[13:21] noego is Online
[13:21] herman Bergson: let me put it in other words...
[13:22] herman Bergson: a diamond is a physical thing with properties....
[13:22] Qwark Allen: we doubt a artificial intelligence can feel pain, but, for sure some other mental states can occur
[13:22] herman Bergson: it is extremely hard, can cut glass , can glitter, etc....
[13:22] The Silent one (odie.rhosar) is Online
[13:23] herman Bergson: but functionalism looks at things as processes in causal relations
[13:23] Qwark Allen: we have pain cause we have sensors for it, cause of evolution
[13:23] Sybyle Perdide: I think, it depends on your definition of pain, Qwark ..some machines have programs to recreate themselves if there are errors
[13:23] Qwark Allen: probably no need to apply pain sensors to a AI
[13:24] herman Bergson: yes but there you relate pain to sensors, while functionalism defines pain as amental state as a function....
[13:24] herman Bergson: Well...maybe it is a matter of meaning....
[13:24] herman Bergson: what does 'pain' mean....
[13:24] Mick Nerido: Pain is a function of our primitive brain...
[13:24] Qwark Allen: danger, something is messing with your physical integrity
[13:25] herman Bergson: there the functionalist says....a relation between input and output and causal realtions with other mental states
[13:25] herman Bergson: I still think....the reference of pain is a bodily brain process...
[13:26] herman Bergson: the word pain is another word for certain neural processes....
[13:26] herman Bergson: it is about meaning and reference
[13:26] Mick Nerido: Pain is an overload of sensation
[13:26] Lizzy Pleides: pain is not only a physical state
[13:26] Qwark Allen: its our sensors that alert us for something
[13:26] Qwark Allen: pain is just one
[13:26] herman Bergson: There we go Lizzy....
[13:26] Velvet (velvet.braham): I like Mick's definition
[13:26] Farv Hallison: the brain has many processes. There is no distinct before and after for any single process.
[13:26] Qwark Allen: fun thing, to think about, is the brain short circuits
[13:27] Qwark Allen: cause some sensors are mixed
[13:27] herman Bergson: True Farv....is is like streaming water....
[13:27] Qwark Allen: like cold and menthol, and hot and spicy
[13:27] Quiet-Water (pearl.moonlight) is Online
[13:27] Mick Nerido: to much of a good sensation can hurt
[13:27] Farv Hallison: you can never step twice into the same river.
[13:27] Farv Hallison: Heraclitus
[13:27] Bibbe Oh: reptile brain
[13:27] Qwark Allen: when you eat a peper, the body tells you its hot, but in reality its not
[13:27] herman Bergson: you know your classic Farv ^_^
[13:28] Velvet (velvet.braham): wow Farv
[13:28] Qwark Allen: the same goes for menthol
[13:28] Lizzy Pleides: but menthol and pepper are different despite
[13:28] Qwark Allen: so some mental states of us are kind confused
[13:28] herman Bergson: I know I am a kind of classic in my ideas....
[13:29] Qwark Allen: yes, but menthol and cold receptors are the same
[13:29] Qwark Allen: hot and spicy also
[13:29] herman Bergson: Funny thing is....I am still cherishing the ideas of my thesis of 1977 :-)
[13:29] herman Bergson: Even though we have functionalism now…which is so much applauded...
[13:29] Lizzy Pleides: and why do they taste different?
[13:30] :: Beertje :: (beertje.beaumont): why are you cherishing them Herman?
[13:30] Farv Hallison: Can we be conscious of more than one thing at a time?
[13:30] herman Bergson: Well....
[13:30] Qwark Allen: ehehhe menthol has one receptor, spice another
[13:30] herman Bergson: Like everything....also philosophy is a matter of trends.....
[13:30] herman Bergson: especially in academic circles...
[13:30] Qwark Allen: the short circuit is between hot/spice and menthol/cold
[13:31] Sybyle Perdide: a dedicated follower of fashions .. giggles
[13:31] herman Bergson: Take for instance the China Brain argument against functionalism....
[13:31] Lizzy Pleides: so not the same receptors as you said b4
[13:31] herman Bergson: I won't trouble you with that...
[13:32] herman Bergson: But when you are a scholar at a university...you have to publish...
[13:32] Farv Hallison: How does the brain understand anything?
[13:32] Qwark Allen: lizzy, read it, lol, the receptor for cold is the same for menthol, and the one for spice is the same one for hot
[13:32] herman Bergson: like everyone does....so the thought experiment of the China Brain (you can google it) has to be discussed
[13:33] herman Bergson: "How does the brain understand anything?"
[13:33] herman Bergson: That is the whole point Farv...is it the brain or the mind ?
[13:33] herman Bergson: and is the mind identical to the brain, just another word for the same thing?
[13:33] Farv Hallison: I'm thinking of the Chinese Room.
[13:33] Sybyle Perdide: so the mind is metaphysics at its best? if it exists?
[13:34] herman Bergson: Ahhhh...brilliant argument of John Searle....
[13:34] herman Bergson: We try to find that out Sybyle ^_^
[13:34] Farv Hallison: Room. We see and translate Chinese to English without knowing the meaning.
[13:34] Sybyle Perdide: so I got you
[13:34] Sybyle Perdide: : )
[13:34] herman Bergson: We'll discuss the Chinese Room soon Farv....
[13:35] herman Bergson: You got me Sybyle? :-)
[13:35] Mick Nerido: Brain is the physical machine, the mind is the functional effect?
[13:35] herman Bergson: You want me???? ^_^
[13:35] Sybyle Perdide: one of your argumentations..I understood I mean
[13:35] herman Bergson: smiles
[13:35] herman Bergson: ok Sybyle
[13:36] Sybyle Perdide: and sure ..I want you
[13:36] herman Bergson: Yes Mick....maybe that is a way to put it...
[13:36] herman Bergson: grins
[13:36] Farv Hallison: Is the Mind a thing or a process?
[13:37] herman Bergson: I love the dozens of loose ends we have to deal with here....
[13:37] herman Bergson: That is the quintessential question Farv
[13:37] herman Bergson: is the mind something or a function....
[13:37] Farv Hallison: Is Consciousness a thing or a process?
[13:37] Sybyle Perdide: but.. if the effect follows physical processes, isn't it the effect of them and so also a physical effect
[13:38] herman Bergson: Let me put it this way Farv....
[13:38] Bibbe Oh: Brain is the machine and mind the hard drive?
[13:38] Lizzy Pleides: both will be needed for it i guess
[13:38] herman Bergson: ...the best explanation of consciousness/the mind I have heard so far is from John Searle
[13:39] herman Bergson: He says...take a glass of water....the water is liquid....
[13:39] herman Bergson: yet you can not separate liquidity from the water...
[13:39] herman Bergson: neither can you find an H2O molecule that is liquid...
[13:39] herman Bergson: yet
[13:40] herman Bergson: put a bunch of H2O molecules together and you got liquidity
[13:40] herman Bergson: so put a bunch of neurons together and you get under certain circumstances consciousness
[13:40] Farv Hallison: Would you say liquid water is an emergent property of H2O molecules?
[13:41] herman Bergson: here we come up with the concept of 'emergence'
[13:41] herman Bergson: we haven't reached that subject yet.....but it is a tempting idea...
[13:41] herman Bergson: But one must be careful with it in terms of ontology...
[13:42] herman Bergson: because...when it is an emergent property of H2O molecules...then still it has to be something physical..
[13:42] Farv Hallison: Is there a difference between Ontology and Metaphysics?
[13:42] herman Bergson: otherwise we are back to dualism again
[13:43] herman Bergson: yes....
[13:43] herman Bergson: ontology tries to explain what IS
[13:43] herman Bergson: Metaphysics tries to explain how what IS is structured....
[13:44] herman Bergson: Like functionalism tells us how mental states are structured, but doesn't say anything about what IS.....what it is that makes the function possible....can be anything theoretically
[13:45] herman Bergson: for instance....Physics explains physical processes...
[13:45] herman Bergson: Metaphysics would tell us that these processes have a goal...
[13:46] herman Bergson: ontology would tell us that it is only matter that is at the basis of all
[13:46] herman Bergson: while Descartes for instance would say NO there is also a mental substance
[13:47] Farv Hallison: that is materialism, that matter is the basis. We could have a theory where pure Mind is the basis.
[13:48] Mick Nerido: Lots to think about, thanks Professor
[13:48] herman Bergson: Yes Farv, that also can be an option....Chalmers postulates for instance a kind of panpsychism...
[13:48] herman Bergson: You are so right Mick
[13:48] herman Bergson: Sometimes I don't know where to begin and where to end...
[13:49] herman Bergson: that is why I chose my materialist starting point....
[13:49] herman Bergson: whether it will hold or not, we'll see in future lectures... ㋡
[13:50] herman Bergson: So ...thank you all for this good discussion today...
[13:50] Guestboook van tipjar stand: Velvet Braham donated L$50. Thank you very much, it is much appreciated!
[13:50] Qwark Allen: ::::::::: * E * X * C * E * L * L * E * N * T * ::::::::::
[13:50] herman Bergson: See you next time again...
[13:50] Qwark Allen: ¸¸.☆´ ¯¨☆.¸¸`☆** **☆´ ¸¸.☆¨¯`☆ H E R MA N ☆´ ¯¨☆.¸¸`☆** **☆´ ¸¸.☆¨¯`
[13:50] Qwark Allen: ty
[13:50] herman Bergson: Class dismissed
[13:50] Bibbe Oh: thank you very much
[13:50] Lizzy Pleides: Thank you Herman!
[13:50] Sybyle Perdide: it was really great herman
[13:50] Sybyle Perdide: thank you
[13:51] herman Bergson: My pleasure...
[13:51] Qwark Allen: really good lecture
[13:51] bergfrau Apfelbaum: ty herman :-) and class
[13:51] :: Beertje :: (beertje.beaumont): Thank you Herman..it was very interesting as always
[13:51] herman Bergson: thank you Beertje
[13:51] bergfrau Apfelbaum: byebye class :-) see u soon!
[13:51] Velvet (velvet.braham): Thank you, Professor!
[13:51] :: Beertje :: (beertje.beaumont): have a goodnight all:)
[13:52] Farv Hallison: thank you professor Bergson.
[13:52] Lizzy Pleides: nite Beertje!
[13:52] bergfrau Apfelbaum: bussi herman :-)
[13:52] herman Bergson: Thank you Farv for your good remarks!
[13:52] Farv Hallison: bye Lizzy
[13:52] Sybyle Perdide: bye beertje
[13:52] Lizzy Pleides: bye farv
[13:52] Flying Lips Vector interactor: Lizzy Pleides bids Farv Hallison farewell!
[13:52] Farv Hallison: bye Bibbe
[13:53] herman Bergson: Bye Bibbe


Tuesday, November 15, 2011

361: Functionalism

Theories of the mind PRIOR to functionalism have been concerned both with (1) what there is, and (2) what gives each type of mental state its own identity,

for example what pains have in common in virtue of which they are pains. Stretching these terms a bit, we might say that (1) is a matter of ontology and (2) of metaphysics.

Here are the ontological claims: Dualism told us that there are both mental and physical substances,

whereas behaviorism and physicalism are monistic, claiming that there are only physical substances.

Here are the metaphysical claims: Behaviorism tells us that what pains (for example) have in common in virtue of which they are pains is something behavioral;

dualism gave a nonphysical answer to this question, and physicalism gives a physical answer to this question, referring to, for instance, the firing of c-fibers.

Turning now to functionalism, it answers the metaphysical question without answering the ontological question.

Functionalism tells us that what pains have in common, what makes them pains, is their function;

but functionalism does not tell us whether the beings that have pains have any nonphysical parts.

In the beginning I said that I wished to investigate the feasibility of a materialist theory of mind, which, in more contemporary wording, may also be called a physicalist theory of mind.

The catchy title of a book that has been a bestseller in the Netherlands, "Wij zijn ons brein" (We are our brain), may sound nice, but becomes questionable due to functionalism.

Let me explain what this means. Water is type-identical to H2O, which means that any drop of matter with certain characteristics is H2O,

and the discovery that water is H2O facilitated the (ontological) reduction of water to H2O. Why has water been reduced to H2O rather than vice versa?

The general idea is that chemistry has the resources to deal with a much wider range of phenomena than does a science that is restricted to studying water. Consequently, chemistry is held to be the more 'basic' or 'fundamental' science.

But suppose that pain is identical with the firing of c-fibers. Whenever you say "that is pain", you can point at the firing of c-fibers.

Then we have a problem, because if "pain" and "firing c-fibers" are identical, then every organism that has no c-fibers cannot feel pain, because that mental state is identical with firing c-fibers.

This is a serious problem for what we call the type identity theory. Now suppose that your dog has no c-fibers, but d-fibers.

When you accidentally step on its tail, the poor fellow barks "woooowau..", jumps up and tries to get away, and on a scanner you see certain nerves and parts of its brain become active.

These nerves are different from our c-fibers, but the state the poor doggy is in certainly looks identical to our behavior when someone steps on our toe. It is difficult to deny that your dog feels pain.

Here functionalism seems to offer a real solution. "We are our brain" may be true, but that doesn't rule out the possibility that, for instance, an alien that is not carbon-based as we are, but silicon-based, can have a brain too.

Like I said about carburetors in the previous lecture: they can be made of all kinds of materials and have all kinds of different shapes, but they all do the same job: mix air and petrol.

Thus, as functionalism states: if A, B and C do the same job, have the same functional role in an organism or system, then it is ontologically not important what A, B and C are made of.
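
To make this multiple-realizability point concrete, here is a minimal sketch in Python (my own illustration, not part of the lecture; the class names and the `in_pain` criterion are invented for the example): pain is specified only by its functional role, and c-fibers, d-fibers or a silicon controller are simply different realizers of that same role.

```python
from abc import ABC, abstractmethod

class PainRealizer(ABC):
    """The functional role: map a tissue-damage input to an avoidance output.
    What the realizer is made of is deliberately left open."""

    @abstractmethod
    def respond(self, damage_signal: float) -> str:
        ...

class CFiberSystem(PainRealizer):        # human-style wetware
    def respond(self, damage_signal: float) -> str:
        return "wince and withdraw" if damage_signal > 0.5 else "ignore"

class DFiberSystem(PainRealizer):        # the hypothetical dog from the lecture
    def respond(self, damage_signal: float) -> str:
        return "bark and jump away" if damage_signal > 0.5 else "ignore"

class SiliconController(PainRealizer):   # a silicon-based alien or robot
    def respond(self, damage_signal: float) -> str:
        return "retract limb" if damage_signal > 0.5 else "ignore"

def in_pain(system: PainRealizer, damage_signal: float) -> bool:
    """Functionalist criterion: only the input-output role counts,
    not the material the system is built from."""
    return system.respond(damage_signal) != "ignore"

for realizer in (CFiberSystem(), DFiberSystem(), SiliconController()):
    print(type(realizer).__name__, in_pain(realizer, damage_signal=0.9))
# All three count as 'in pain' on the functional criterion, even though
# they are 'made of' entirely different stuff; on a strict type-identity
# reading, only the c-fiber case would qualify.
```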

This has serious consequences, for it can mean that any system, that does the same as our brain-based mental states, would have a mind too and consciousness.

I guess you already see it coming….. a computer, when properly programmed….. would there ever be a HAL like in "2001: A Space Odyssey"? We'll see.


The Discussion

[13:22] herman Bergson: Thank you.... ㋡
[13:23] Gemma Allen (gemma.cleanslate): too much to digest!
[13:23] Qwark Allen: ::::::::: * E * X * C * E * L * L * E * N * T * ::::::::::
[13:23] herman Bergson: The floor is yours....
[13:23] Qwark Allen: Just what do you think you're doing DAAAAVE ::: -_+
[13:23] herman Bergson: oh dear...Gemma!!!!
[13:23] Qwark Allen: *•.¸ I'm sorry Dave... I'm afraid I can't do that... ツ ¸.•*
[13:23] Gemma Allen (gemma.cleanslate): well the dog and I function the same
[13:23] Bejiita Imako: hmm the hal thing im a bit skeptical about because a computer may be analogous to a sort of mechanical brain
[13:23] Bejiita Imako: but our brains involve chemical signals
[13:24] Mick Nerido: It make sense that a brain system other than ours could be called conscious…
[13:24] Bejiita Imako: but a computer instead does not at all the same
[13:24] Bejiita Imako: here electricity flip a switch relying on pure math
[13:24] Bejiita Imako: pure
[13:24] Nigel Qissinger: thank you, very interesting points you made
[13:24] herman Bergson: that is the point Mick.....
[13:24] Qwark Allen: the language that the brain uses to process and transport information is already decoded, and can be simulated by a computer
[13:24] Gemma Allen (gemma.cleanslate): Yes-ah!
[13:24] Sybyle Perdide: it's the same as the comparison with bike and car, or flood and bomb, isn't it?
[13:24] herman Bergson: functionalism doesn't formulate ontological claims...
[13:24] Gemma Allen (gemma.cleanslate): Yes-ah!
[13:24] Gemma Allen (gemma.cleanslate): to mick i mean
[13:25] herman Bergson: Well..the next debate will be Artificial Intelligence...
[13:25] Bejiita Imako: a computer can simulate but never do the same, cause sure it can replicate, but it does it in a completely different way than a brain
[13:25] herman Bergson: does it exist?
[13:25] herman Bergson: Well Bejiita
[13:26] herman Bergson: our computers of today, yes....
[13:26] Sybyle Perdide: the problem is the reduction
[13:26] herman Bergson: but that is not the point
[13:26] Sybyle Perdide: a computer can be intelligent, but he cannot taste oranges
[13:26] Nigel Qissinger: but IF a computer did DO the same stuff as a brain, it wouldn't matter if it was made of metal or flesh
[13:26] herman Bergson: from a functionalist point of view it isnt relevant how that computer is constructed...
[13:26] Qwark Allen: heheeh why not sybyle?
[13:27] herman Bergson: when it does the same as what our mental states do...it must be conscious too
[13:27] Lizzy Pleides: he can taste it but he can't enjoy it
[13:27] Sybyle Perdide: because artificial intelligence is not the same like tasting oranges
[13:27] Qwark Allen: probably in the future they will taste it better than us, cause they will have better sensors
[13:27] Nigel Qissinger: it can not taste oranges only because it does not have a tasting organ. If we built a tongue that could send the data about the orange to the computer, then it could taste
[13:27] Sybyle Perdide: yes Lizzy
[13:27] Sybyle Perdide: he can analyze
[13:27] Mick Nerido: When your computer crashes does it feel pain? lol
[13:28] Qwark Allen: they have sensors, so taste is not a issue
[13:28] Lizzy Pleides: mine does
[13:28] Bejiita Imako: a computer could sense what it does maybe, but all a computer sees is a binary string of 1 and 0 in a pure mathematical way
[13:28] Qwark Allen: to have pleasure with it, is another story
[13:28] Bejiita Imako: in the future maybe, but how would such a cpu operate then?
[13:28] herman Bergson: Welll In Sybyles words I hear Nagel's "what is it like to be a bat..?"
[13:28] Qwark Allen: in a way, we feel pleasure with things, because there is a release of endorphins
[13:28] Sybyle Perdide: blushes
[13:28] Bejiita Imako: can not rely solely on transistor-based binary math
[13:28] Qwark Allen: in the brain
[13:29] herman Bergson: You must not talk about computers as we understand them now...
[13:29] Bejiita Imako: interesting idea at least
[13:29] Bejiita Imako: can machines think
[13:29] Qwark Allen: in some damaged brains there are no endorphins
[13:29] herman Bergson: you must talk about functions they perform
[13:29] Bejiita Imako: hmm
[13:29] Sybyle Perdide: perform is the right word I think
[13:30] Qwark Allen: we have very limited sensors
[13:30] Sybyle Perdide: if a computer starts to do more than performing, then the difference is shrinking dangerously
[13:30] Qwark Allen: as a specie
[13:30] herman Bergson: Functionalism was the step towards a computational model of the mind
[13:31] Mick Nerido: Functionalism means if it functions the same way no matter the construct the results are the same.
[13:31] herman Bergson: yes Mick.....
[13:31] Sybyle Perdide: but it is an aim-oriented thinking
[13:31] Sybyle Perdide: like a black box
[13:31] herman Bergson: So our idea of a computer can be completely wrong and in 10000 years maybe a computer is made of water ㋡
[13:32] Qwark Allen: that will happen before
[13:32] Qwark Allen: maybe in 50 years
[13:32] Qwark Allen: with quantum computers
[13:32] Bejiita Imako: a future machine might be more similar but not today's machines
[13:32] herman Bergson: yes Qwark..then we are all fishes in the ocean again, back to our roots ^_^
[13:32] Qwark Allen: with 1 zero and -1
[13:32] Lizzy Pleides: and when we drink it we suddenly can speak chinese
[13:32] Sybyle Perdide: laughs
[13:33] Qwark Allen: there are already quantum computers done
[13:33] herman Bergson: I always dream of that Lizzy....
[13:33] Bejiita Imako: cause although they perform sort of the same things their INTERNAL functioning is different; carburetors work internally on the exact same principle
[13:33] Lizzy Pleides: me too
[13:33] Qwark Allen: they use atoms, the atoms' spin, to store information
[13:33] Bejiita Imako: but a brain and a computer of today do not, even if both can produce and process information
[13:33] herman Bergson: Like in the Matrix....put in the disk martial arts....and poof...there you go
[13:33] Sybyle Perdide: but..are this technical details relevant?
[13:34] herman Bergson: You must look at it in an other way Bejiita....
[13:34] Bejiita Imako: i think so, the internal principle of operation i think is important, don't know
[13:34] herman Bergson: You must imagine a computer that DOES all the things our brain does
[13:34] Qwark Allen: yes, you should read about the new technologies, and realize, that the utopia of yesterday, will be the present, in a near future
[13:34] herman Bergson: and from there you start analyzing the consequences...
[13:35] Bejiita Imako: but who knows what, for example, a quantum computer could do, and they also talk about DNA-based machines
[13:35] Bejiita Imako: a such computer might be able to do that
[13:35] Qwark Allen: they have 3 states, zero one and minus one
[13:35] Mick Nerido: a computer would not need all our lower brain functions to be conscious, or would it?
[13:36] Qwark Allen: just need the same capacity of processing information
[13:36] herman Bergson: I really wouldn't know Mick
[13:36] herman Bergson: because this implies already a real definition of consciousness
[13:36] Qwark Allen: their speed doubles every 18 months
[13:37] herman Bergson: And consciousness is our biggest problem in the philosophy of mind
[13:37] Qwark Allen: in 25 years they will have the same rate of processing information as our brain
[13:37] Qwark Allen: then AI will be born for sure
[13:37] Bejiita Imako: ah
[13:37] Bejiita Imako: hmm AI is an interesting thing for sure
[13:37] Qwark Allen: thats what we are talking about today
[13:38] Gemma Allen (gemma.cleanslate): but then we get into the issue of feelings
[13:38] Qwark Allen: will be possible in a very near future
[13:38] herman Bergson: You can read about functionalism....
[13:38] Sybyle Perdide: do we talk about constructing new iPads?
[13:38] herman Bergson: I am still not sure how to deal with it and its consequences...
[13:39] herman Bergson: Yes Gemma...
[13:39] herman Bergson: feelings , beliefs, desires.... consciousness...
[13:39] Gemma Allen (gemma.cleanslate): right!
[13:39] herman Bergson: big big hurdles still to take
[13:40] herman Bergson: So....for the time being....
[13:40] Qwark Allen: let's hope we don't end up like in Terminator or in Battlestar Galactica
[13:40] herman Bergson: let's assume that mental states can be multiply realized.....
[13:40] herman Bergson: Well Qwark...I loved the movies ㋡
[13:41] Gemma Allen (gemma.cleanslate): ♥ LOL ♥
[13:41] herman Bergson: they had a happy end ^_^
[13:41] Sybyle Perdide: what about tragic ends?
[13:41] Sybyle Perdide: are they wrong
[13:41] Sybyle Perdide: ?
[13:42] Gemma Allen (gemma.cleanslate): those two movies had a happy end
[13:42] herman Bergson: smiles
[13:42] Gemma Allen (gemma.cleanslate): others do not
[13:42] Mick Nerido: endings are personal not objective
[13:42] Bejiita Imako: but that's because the scripts often have a happy end
[13:42] herman Bergson: We have the greek tragedies...
[13:42] Bejiita Imako: in reality evil often wins at least for very long
[13:42] Bejiita Imako: just look at all terror and wars all over the world that never ends
[13:43] Bejiita Imako: and all greediness
[13:43] Sybyle Perdide: but would a computer be able to feel the tragic.. not only recognizing that the end was not good for all because one is dead?
[13:43] Bejiita Imako: bank directors and such
[13:43] herman Bergson: But even the end of the tragedy was regarded as a happy end..offering the katharsis to the audience
[13:44] Sybyle Perdide: but Orestes is unlucky and dead at the end
[13:44] Mick Nerido: Tragedies make you feel lucky it did not happen
[13:44] herman Bergson: yes, he is, but the audience experiences the meaning of it
[13:44] Sybyle Perdide: but I was always on his side
[13:44] Bejiita Imako: ah
[13:45] herman Bergson: Well....Let's investigate if we can have an Orestes computer in the future...:)
[13:45] Mick Nerido: Thanks for a great class! BYe
[13:45] Bejiita Imako: ㋡
[13:45] Sybyle Perdide: yay
[13:45] herman Bergson: Therefore...thank you all for your participation...
[13:45] Sybyle Perdide: thats a good idea
[13:45] Bejiita Imako: interesting
[13:45] Bejiita Imako: ㋡
[13:45] Gemma Allen (gemma.cleanslate): ♥ Thank Youuuuuuuuuu!! ♥
[13:45] herman Bergson: Class dismissed ㋡
[13:46] Bejiita Imako: ok cu next time


Wednesday, June 10, 2009

6c Can my computer think?

As promised and at Samuel's request we'll address the problem of "Other Minds" today. As such it is already a serious philosophical issue, but in relation to artificial intelligence it becomes an even more serious problem.

In the former lecture I presented three arguments. Let me repeat them:

A.
1. Thought is some kind of computation.
2. Digital computers can perform all possible computations.
therefore,
3. Digital computers can think.

B.
1. Thought is some kind of conscious experience.
2. Machines can't have conscious experiences.
therefore,
3. Machines can't think.

C.
1. Thoughts are specific biological brain processes.
2. Artificial computers can't have biological brain processes.
therefore,
3. Artificial computers can't think.

As you see, all depends on the definition of thought. Only reasoning A. will confront us with the problem of other minds in relation to machines.
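
Purely as an illustration (the predicates and the little dictionary below are my own toy labels, nothing from the lecture itself), the three arguments can be caricatured in a few lines of Python: each one plugs a different definition of "thought" into the same schema, and the verdict about the computer changes with the definition, not with the machine.

```python
# Toy encoding of arguments A-C; the dictionary keys are hypothetical labels.
computer = {
    "performs_computation": True,        # premise A.2
    "has_conscious_experience": False,   # premise B.2, simply assumed here
    "has_biological_brain": False,       # premise C.2
}

def can_think_A(system: dict) -> bool:
    # A: thought is some kind of computation
    return system["performs_computation"]

def can_think_B(system: dict) -> bool:
    # B: thought is some kind of conscious experience
    return system["has_conscious_experience"]

def can_think_C(system: dict) -> bool:
    # C: thoughts are specific biological brain processes
    return system["has_biological_brain"]

print(can_think_A(computer), can_think_B(computer), can_think_C(computer))
# -> True False False: the same machine, three different verdicts, because
#    each argument builds its answer into its definition of "thought".
```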

I suppose we can agree at least on one thing: a thought is a mental state, a mental state we are aware of. So eventually we would have to admit that computers can have mental states.

A thing often seen in science fiction movies and books. A nice fellow in this category is Data from the Enterprise, or HAL from "2001: A Space Odyssey".

We have another matter to deal with: intelligence. Could we at least define it as a quality level of thinking, where unintelligent and intelligent behavior differ in the quality of how, for instance, a problem is solved?

So, a mind in the computer of the future? Another mind, not my mind. Of course in our daily life we are certain that other human beings have their own minds, like I have a mind.

But we assume a lot of things and it works well, till we approach all these assumptions philosophically. Then we have to admit that the alleged certainty is questionable.

In this situation what it is all about is:
how do we justify that certainty we have of knowing that other people (and in the far future maybe humanoid robots) have their own minds?

In introducing "methodic doubt" into philosophy, Descartes created the backdrop against which solipsism subsequently developed and was made to seem, if not plausible, at least irrefutable.

Solipsism is the theory that the only thing we are certain of is the content of our own mind and that our knowledge of our mental states is private. No one but me can experience, for instance, my headache.

But it works also the other way around. Neither can I experience the mental states of another person.

What then of my knowledge of the minds of others? On Locke's view there can be only one answer: since what I know directly is the existence and contents of my own mind,

it follows that my knowledge of the minds of others, if I am to be said to possess such knowledge at all, has to be indirect and analogical, an inference from my own case. This is the so-called "argument from analogy" for other minds.

I won't bother you with all the arguments against this argument from analogy, but one is interesting. If you apply scientific standards and you observe here,

that we make a generalisation based on only one observation, the observation of my own mind, you can imagine that this argument from analogy isn't a strong one epistemologically.

Assuming that the argument from analogy is unacceptable, the most obvious alternative is to adopt some form of that variety of behaviorism according to which all psychological expressions can be fully understood in terms of behavior.

We have seen that before..... a computer shows the same behavior as a human, so we may call it intelligent. But does a translation of all psychological expressions in terms of behavior do the job?
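
Here is a deliberately crude sketch of that behaviorist move (the judge function and its word-overlap test are my own invention, only meant to show the shape of the criterion): intelligence is ascribed on the basis of observed behavior alone, with no access to whatever is, or is not, going on inside.

```python
def behaves_intelligently(transcript: list[tuple[str, str]]) -> bool:
    """Toy behaviorist criterion: something counts as intelligent if, for
    every question asked, it produced a non-empty, roughly on-topic answer.
    'On-topic' is judged by naive word overlap - the point is only that
    nothing but observable behavior enters the verdict."""
    def on_topic(question: str, answer: str) -> bool:
        q_words = set(question.lower().split())
        a_words = set(answer.lower().split())
        return bool(answer.strip()) and bool(q_words & a_words)
    return all(on_topic(q, a) for q, a in transcript)

# The same transcript gets the same verdict whether it came from a human
# or from a machine - the internals never appear in the test.
human_log   = [("do you feel pain", "yes pain is something I clearly feel")]
machine_log = [("do you feel pain", "yes pain is something I clearly feel")]
print(behaves_intelligently(human_log), behaves_intelligently(machine_log))
# -> True True
```

Whether passing such a behavioral test is enough to ascribe a mind is exactly what the next objection puts in doubt.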

That means: does it offer the epistemological justification to claim with certainty that my computer can think? Of course, we'll come up with the next criticism.

It is implausible to give a behavioristic account of some first-person psychological statements. When, for example, I say that I have a terrible pain, I do not say this on the basis of observation of my own behavior and the circumstances in which I am placed.

How far did we get today? Will there be a moment in the far future when I ascribe a conscious mind to a computer, or at least say that my computer thinks?

If you look at the epistemological problems of "Other Minds", I am not so sure. We are not even capable of coming up with a univocal solution for the "Other Minds" issue. And that problem wasn't raised because of artificial intelligence, but by the observed intelligence in my fellow men.


The Discussion

herman Bergson: So much on AI
Gemma Cleanslate: it does make one wonder about the future tho
herman Bergson: In what way Gemma?
Gemma Cleanslate: what science will come up with in the area of computers
Gemma Cleanslate: and how much they will progress
Gemma Cleanslate: toward an AI
Gemma Cleanslate: it is all math
Ze Novikov: electro/biologic human computer interface
herman Bergson: Well...the AI community came up with high expectations....
Gemma Cleanslate: yes ZE
Paula Dix: ive been talking with my friends here about that, and we came with an idea
herman Bergson: But they scaled down the big dreams
Ze Novikov: computer will augment our physical processes
Paula Dix: if we find some alien with ships and technology we will be certain they can think, even if probably in ways different from ours
Paula Dix: in the same sense, when we have computers that have a level of processing "thinking" similar to ours, we will accept that they think
Ze Novikov: we have it in a crude form with artificial limbs
Paula Dix: even if we have no idea what thinking is
herman Bergson: There definitely will be a computer/central nervous system interaction in the future
Ze Novikov: yes
herman Bergson: But that is something different from AI
Paula Dix: lol thats the idea of my science fiction lover friend, the computers will be us :)))
Zen Arado: what about self awareness?
Zen Arado: wouldnt that be a criterion?
herman Bergson: Yes indeed Zen....there you run into fundamental philosophical problems again
herman Bergson: Hume even didnt discover a Self in himself
Zen Arado: yes there isnt one to be aware of I guess :)
herman Bergson: So self awareness is already questioned by....what are you aware of
Cailleach Shan: Are we assuming that 'computers that can think' are a bad thing?
Paula Dix: like that Dennett text about mind and gravity center
Paula Dix: (if anyone want this text, i have it here)
herman Bergson: Well Cailleach.....computers that can think...it would create serious ethical problems for instance..
herman Bergson: Is a thinking computer a new species?
Paula Dix: i believe they think already and that this is good :))))
Alarice Beaumont: i think they do that right now
herman Bergson: Does ethics apply to it or may I turn it off at will for instance..or is that murder?
Zen Arado: we are afraid they wouldnt share our moral sense
Alarice Beaumont: but only in the meanings of the person which program them
Gemma Cleanslate: or throw it out the window even worse
Ze Novikov: yes
Zen Arado: that was the prob with Hal :)
Paula Dix: i guess its not unethical to turn them off, since they are designed to do so, it wont damage them like turning us off
Cailleach Shan: That was my next question Herman. Can computers turn themselves back on?
herman Bergson: Ever heard of the Laws of Robotics, formulated by Isaac Asimov?
Zen Arado: yes
Paula Dix: oh yes im reading the Caves of Steel series :)))
Cailleach Shan: Nope
herman Bergson: Cool
Zen Arado: old sci fi
Zen Arado: :)
herman Bergson: Yes it is Zen...but so is the bible and people still read it :-)
Cailleach Shan: lol Bible Sci fi
herman Bergson: I mean the 'old' feature...
herman Bergson: not the SF character..lol
Zen Arado: sure ...but reminds me of my youth
Zen Arado: :)
Paula Dix: first book is dated, but ok if you read it like a steampunk thing, but second is good
herman Bergson: Yes....me too
Zen Arado: wasnt criticising it he came up with good ideas I think
Cailleach Shan: Re. the 'ethics' question. I think we have made up ethics ourselves, so eventually we would make up a new set to incorporate thinking computers.
herman Bergson: Well he made the robot in fact a servant of men
herman Bergson: not an autonomous being
Gemma Cleanslate: and they are supposed to be
Zen Arado: yes they mustnt harm a human
Zen Arado: 1st law?
herman Bergson: For instance...
herman Bergson: Yes
Paula Dix: ah, yes, then there is the Bicentennial Man book/movie, also by Asimov, discussing when they turn into men
herman Bergson: So what when an intelligent computer is regarded as an autonomous being?
Zen Arado: when they have emotions ?
herman Bergson: yes....even more complicated
Cailleach Shan: We have too many autonomous beings on the planet already.
Paula Dix: its like to discuss animal rights
herman Bergson smiles
Zen Arado: its the lack of emotion that scares us maybe
Paula Dix: a chicken can think?
Paula Dix: they have emotions...
Ze Novikov: chickens?
Cailleach Shan: What is scary about lack of emotion?
Paula Dix: lol any animals
Paula Dix: just got some down on the scale
Alarice Beaumont: one might decide wrong
Zen Arado: lack of compassion
Alarice Beaumont: no emotion.. no feelings .. no ethics
herman Bergson: Well.....I think, that we'll never see autonomous computers as independent beings
Zen Arado: you couldnt make up rules to substitute for lack of compassion
Zen Arado: I dont think
herman Bergson: I even dont believe in a thinking computer
Paula Dix: herman so you think thinking is a biological only thing?
herman Bergson: So I would answer the question Can my computer think? with NO
Gemma Cleanslate: hmmmm
herman Bergson: Yes....it is a biological thing primarily
Paula Dix: ok
Zen Arado: trouble is we dont really know what thinking is ....as you said earlier Herman
Paula Dix: yes, maybe computers are other way of thinking
herman Bergson: Exactly Zen.....we even cant find the right answers on our own philosophical questions
Paula Dix: and i believe we need an ethics toward them, same as toward other animals
Gemma Cleanslate: ow wow
Paula Dix: just in case :))
herman Bergson: I just turn it off at will Paula ^_^
Alarice Beaumont: lol
Ze Novikov: lol
Paula Dix: oh, but that is ok for them
Gemma Cleanslate: i think i will leave mine on from now on lo
Zen Arado: you cant turn off the internet though :)
Alarice Beaumont: lol
Paula Dix: they will even ask to be turned off from time to time :))))))))
Gemma Cleanslate: just in case
Zen Arado: people are worried about that
Alarice Beaumont: ah.. but you can quit the connection!
Gemma Cleanslate: well if it asks ok
Paula Dix: lol Gemma
herman Bergson: Yes Gemma and next morning it will say I think, so I am :-)
Gemma Cleanslate: lol
Ze Novikov: lol
Gemma Cleanslate: yes
Paula Dix: My friend Kore, who was here some times, is a neuroscientist and he is working with AI systems
Alarice Beaumont: does he say that it's possible to make machines think?!
Zen Arado: yes Paula ?
Paula Dix: he has this problem, whether he can turn off the AI that has emotions or not
herman Bergson: Yes there are AI systems....
Cailleach Shan: Now there's an interesting thought... if we are 'turned off' from existence are we still there somewhere waiting to be turned back on.
Paula Dix: he says they can think and are alive
Gemma Cleanslate: ah
Gemma Cleanslate: too bad he is not here now
Paula Dix: or at least these ones he is making :))
herman Bergson: If we were computers with a powerswitch, Cailleach
Alarice Beaumont: isn't that called coma?
Paula Dix: i asked him to come, but he couldnt... :(
Gemma Cleanslate: too bad
Ze Novikov: the pod people
Zen Arado: keeps coming back to the personal identity issue doesnt it?
Gemma Cleanslate: yes it does
Gemma Cleanslate: always
Paula Dix: he is developing AIs that can learn, and thats a big step toward awareness i guess
Zen Arado: is there an 'I' in the computer
herman Bergson: The AI systems are only applicable in a limited world
Zen Arado: or in us even
Cailleach Shan: It's a very good way of looking at our fears around 'control'.
herman Bergson: Oh..Personal Identity...... another headache chapter in philosophy
Paula Dix: lol yes Caill and all these people that believe computers can think also say they will be better than us in some time
Paula Dix: well maybe not all :)))
herman Bergson: Well...We dont need to be afraid of thinking computers in our lifetime
Paula Dix: meanwhile, police and students and teachers confrontation on the main university here... :((( i wonder if they can think
Cailleach Shan: mmmmm.... not too sure about that Herman.... look how quickly technology advances.
Gemma Cleanslate: that is true too
Samuel Okelly: :)
Gemma Cleanslate: so so fast in computer world
Paula Dix: yes, most people lost jobs to computers already
Paula Dix: many not most
Gemma Cleanslate: have to go now
Gemma Cleanslate: bye!
Paula Dix: bye!
Cailleach Shan: cu Gem.
herman Bergson: Bye GEmma :-)
Zen Arado: bye Gemma
Alarice Beaumont: bye Gemma :-))
Samuel Okelly: tc gem
herman Bergson: I think we may conclude our session on this question and move on to the next question
Ze Novikov: ty herman
Ze Novikov: :))
Zen Arado: is/was interesting Herman
herman Bergson: Interesting...... Believing and reasonable..:-)
Ze Novikov: bb everyone until next week :))
herman Bergson: thank you
Zen Arado: bye Ze
herman Bergson: and thank you all for your participation
Cailleach Shan: Nice one Herman..... me and my computer thank you.
Paula Dix: yes very nice discussion :))))
Samuel Okelly: apologies for being so late herman :(
herman Bergson: Give my regards to your computer, Cailleach...:-)
Cailleach Shan: lol ta.
Paula Dix: our computer is turning itself off at random... we need to call the doctor!!
herman Bergson: Things happen, Samuel :-)
Alarice Beaumont: lol
Cailleach Shan: Bye all.
Samuel Okelly: I'll look forward to catching up with the transcript
Samuel Okelly:
Samuel Okelly: †
Samuel Okelly: † (( take care everyone )) †
Samuel Okelly: †
Samuel Okelly:
Zen Arado: I have to go too
Alarice Beaumont: oh.. need to go
Zen Arado: bye
herman Bergson: Ok Samuel....will be in the blog soon
Alarice Beaumont: i will call you for a game of chess Herman ;-)
Alarice Beaumont: everyone have a nice nite / day :-))
Paula Dix: bye
herman Bergson: Ok...:-)
Alarice Beaumont: bye
Paula Dix: back to building...
herman Bergson: Bye Alarice
herman Bergson: Ok Paula
herman Bergson: Happy building:-)