
Friday, December 9, 2011

366: The Brain and Artificial Intelligence

Just as science is eager to discover the secret of life, so the supporters of the Computational Theory of Mind are eager to find artificial intelligence in their computers.


Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it.
AI textbooks define the field as "the study and design of intelligent agents" where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.

John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."

John Searle coined the distinction between STRONG AI and WEAK AI. Weak AI refers to computers that merely behave as if they were intelligent. Well, maybe you can say that they behave intelligently… a matter of definition, I would say.

Maybe you have tried to chat with Elbot or Eliza. They can give you the impression of being understanding and intelligent. However, they aren't. They just shuffle symbols without understanding a single word you type.

Yet this weak artificial intelligence is used nowadays in many situations. It emulates what our mind does. You even find it in modern cars with their sensors.

But from the 1950s on, higher hopes were pinned on the development of strong artificial intelligence. From then on it was always: "Just wait. The next generation of computers will be even more powerful. They will do the job!"

However, we still haven't reached that stage. Just a sidetrack, a thought: if we ever succeed in making a mind appear in a computer, does that mean we are forbidden to ever turn it off again? Wouldn't turning it off be murder, the killing of an individual mind?

Well, don't worry: the Chinese Room argument, which I discussed in the previous lecture, has shown the weak spot of strong artificial intelligence.

Computer programs are formal, meaning they use only syntactical rules to manipulate data, that is, symbols. Our mind, however, has content. Words are not just symbols to the mind; we ascribe meaning to these symbols.

Thus we have to conclude that computer programs are neither sufficient for nor identical with minds.

Strong AI researchers have attempted to program digital computers to understand simple stories. Well-known research of that kind dates back to 1977.

For example, the computer might be expected to understand a simple story about eating in a restaurant. The computer is given three kinds of input:
1. The story.

2. Some general information about restaurants and the kinds of things that typically occur there. For example: people eat in restaurants; people order their food from waiters; people are usually required to pay for what they have ordered; and so on. Researchers in strong AI call this information a 'script'.

3. Some questions about the story.
If the scientists have managed to program the computer properly, then, according to strong AI, the computer will not merely answer the questions correctly; it will literally understand the story.
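
To make the idea concrete, here is a minimal sketch in Python of the kind of script-based question answering described above. It is not the 1977 program itself; the script entries, the story and the matching strategy are invented for illustration, and the point is precisely that the procedure is nothing but pattern lookup.

# Hypothetical sketch of script-based question answering: a hand-written
# "script" of restaurant facts plus crude word matching against the story.
# Everything here is invented for illustration; it is lookup, not understanding.

RESTAURANT_SCRIPT = {
    "who serves the food": "the waiter",
    "what do people do after eating": "they pay the bill",
}

def answer(question, story):
    """Return an answer by script lookup and word overlap: no meaning involved."""
    q = question.lower().strip("?")
    # 1. Try the general script first.
    for pattern, reply in RESTAURANT_SCRIPT.items():
        if pattern in q:
            return reply
    # 2. Otherwise return the story sentence sharing the most words with the question.
    best, best_score = "no answer found", 0
    for sentence in story.split("."):
        score = sum(word in sentence.lower() for word in q.split())
        if score > best_score:
            best, best_score = sentence.strip(), score
    return best

story = "John went to a restaurant. John ordered a hamburger. John paid and left."
print(answer("Who serves the food?", story))    # -> the waiter
print(answer("What did John order?", story))    # -> John ordered a hamburger

Whatever answers come out, nothing in this lookup is about restaurants; that is exactly the gap the Chinese Room argument points at.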

However, even if I were to become a super expert at answering questions in Chinese by shuffling symbols according to syntactical rules, I would never understand the questions.

When we get the questions in English, we are aware of what they mean, whereas I am not aware of the meaning of the Chinese symbols I handle.

And here we hit the most difficult issue of our long quest. We must conclude that the Chinese Room setup (shuffling symbols according to rules) is insufficient for conscious awareness of the meaning of the questions.

Or stated in a more general way: computation is insufficient for consciousness. There is more to the mind than you can emulate with computer programs.


The Discussion

[13:25] herman Bergson: Thank you...
[13:25] Qwark Allen: ::::::::: * E * X * C * E * L * L * E * N * T * ::::::::::
[13:25] herman Bergson: If you have any questions...you have the floor
[13:25] Carmela Sandalwood: Seems to me you leave out an important aspect of consciousness: it is part of an environment
[13:25] Qwark Allen: at least with the technology of today
[13:26] herman Bergson: that is what they always say Qwark.. ㋡
[13:26] Mick Nerido: Is it theoretically possible for computer brains to be concious?
[13:26] herman Bergson: What do you mean by 'part of an environment, Camela?
[13:26] Qwark Allen: computers are around at few decades
[13:26] Carmela Sandalwood: The Chinese room is not reacting to an environment: it is only suffling symbols
[13:26] Gemma Allen (gemma.cleanslate): one question would we be able to turn it off?? i say no because it would not let us if it becomes conscious
[13:27] Bejiita Imako: yes another way to see it
[13:27] herman Bergson: It is reacting on the questions that come in...that is an environment
[13:27] Qwark Allen: and they will achieve our rate of of processing information in 25 years
[13:27] Carmela Sandalwood: so, suppose I say 'is there a flower in the garden?'...simply shuffling symbols can't answer that even if there is an algorithm
[13:27] Qwark Allen: lets see by then, what will be the question by then
[13:27] Carmela Sandalwood: there has to be sensory input
[13:27] Bejiita Imako: for us to understand what the computer puts out you must convert it to analog signals first
[13:27] Bejiita Imako: and that the computer can never understand it can justtunderstand 1 and 0
[13:28] Bejiita Imako: and 1 and 0 is as meaningless to us as the analog is to a computer
[13:28] Carmela Sandalwood: that's way too simplistic Bejiita
[13:28] herman Bergson: Wait....
[13:28] Gemma Allen (gemma.cleanslate): i do not think artificial intelligence will ever take over but it may come close!
[13:28] Carmela Sandalwood: neurons only 'understand' on and off
[13:28] herman Bergson: One issue at a time....
[13:28] herman Bergson: Furst Qwark....
[13:29] Qwark Allen: in the next decades we`ll have quantic computers, that have 1, 0 and also -1
[13:29] herman Bergson: even in 25 years the computer will not have changed Qwark...it is a syntactical machine....
[13:29] Bejiita Imako: hmm the question is how does our brain store information and do the brain have some sort of A7D D7A converter to put meaning to everything
[13:29] Bejiita Imako: or do we interpret it directly?
[13:29] herman Bergson: when there is a machine that generates consciousness it will not be called a computer....
[13:29] Gemma Allen (gemma.cleanslate): that makes sense
[13:30] Qwark Allen: when they decoded the language brain use for comunicate between neurons, and it`s not that diferent
[13:30] herman Bergson: Then the sensory input question of Camela...
[13:30] Carmela Sandalwood: well, we might call it a robot or artificial intelligence...or Robert
[13:30] Mistyowl Warrhol: But isn't the brain a computer? It processes data.
[13:30] Bejiita Imako: compuetr means calculator and thats what a computer does, averything is just binary math to a computer
[13:30] Qwark Allen: its between charged positively, or negatively
[13:30] Qwark Allen: like 0 and 1
[13:30] herman Bergson: I don't think that it makes much difference to the Chinese room argument...
[13:30] Gemma Allen (gemma.cleanslate): ☆*¨¨*<♥*''*BEJIITA!!! *''*<♥:*¨¨*☆
[13:30] Gemma Allen (gemma.cleanslate): but with emotion and feeling
[13:30] Bejiita Imako: but we don't use mathematical formulas to listen to music
[13:30] herman Bergson: We just built a room with cameras and microphones and so on....
[13:30] Qwark Allen: in a way, when computers have the -1 in their language, maybe they will be ahead of us
[13:31] herman Bergson: the basic principle stays the same....
[13:31] Qwark Allen: cause they will have a state that we cannot have
[13:31] Carmela Sandalwood: Suppose your syntactical rules require that you look outside at times and based on what you see, there rules are different
[13:31] Qwark Allen: the minus one
[13:31] Mick Nerido: We are carbon based computers are sillicon based
[13:31] herman Bergson: yes like we have computer programs that can recognize faces
[13:31] Carmela Sandalwood: I don't see what happens in our brain as much different than what happens in a computer
[13:31] herman Bergson: The basic issue here is.....there is output form the computer...
[13:32] Qwark Allen: i think we are having a narcisist approach
[13:32] Qwark Allen: like we are the only ones
[13:32] herman Bergson: but this output does not imply that the computer has any understanding of what it is doing..
[13:32] Qwark Allen: but, i believe AI will come
[13:32] Carmela Sandalwood: 'understanding' is about reacting appropriately to an environment so that you maximize the chances of survival or meeting other goals
[13:32] Qwark Allen: with some capacities of us
[13:32] Carmela Sandalwood: how do you know I have understanding? how do I know you do?
[13:32] herman Bergson: whether it is recognizing faces, checking spelling, doing calculations...
[13:32] Qwark Allen: movies like blade runner will be like a vision of the future
[13:32] herman Bergson: it is all the same to it
[13:33] CONNIE Eichel whispers: thanks gemma :)
[13:33] herman Bergson: Yes Qwark....we love such fantasies
[13:33] Bejiita Imako: yes its just calculating binary math
[13:33] Carmela Sandalwood: and it is all the same to our neurons
[13:33] Gemma Allen (gemma.cleanslate): :-)
[13:33] Sybyle Perdide: you say, the difference is in understanding?
[13:33] Carmela Sandalwood: meaning doesn't exist at the neural level (or at the level of transistors)
[13:33] herman Bergson: I wouldnt say that Carmela...
[13:34] Bejiita Imako: and also as i said before a cpu can only understand some basic instructions, the compiler have to build the machine co with just these basic commands or the computer wont understand it
[13:34] Bejiita Imako: the x 86 instruction set
[13:34] herman Bergson: slow down Bejiita!! pla
[13:34] herman Bergson: plz
[13:34] Carmela Sandalwood: and a neuron only react to certain stimuli
[13:34] Carmela Sandalwood: so?
[13:34] Bejiita Imako: ah
[13:34] Mistyowl Warrhol: One difference between brain and computer, we calculate by also using emotional reactions learned over time.. using our 5 senses (6 according to some) Can we teach computers to use emotions?
[13:34] herman Bergson: there is one thing we still haven't discussed...and that is consciousness....
[13:35] Carmela Sandalwood: and those emotions are calculated by the brain to react
[13:35] herman Bergson: The awareness of our existence...
[13:35] Carmela Sandalwood: there are algorithms there also
[13:35] Carmela Sandalwood: awareness is an internal representation: data
[13:35] herman Bergson: Well that is one of those issues Misty...
[13:36] herman Bergson: What does that mean Carmela?
[13:36] Carmela Sandalwood: our self-awareness is simply an internal collection of data representing our internal state...it isn't perfect, but it exists and is ultimately binary in character
[13:36] herman Bergson: A second point is the fist person awareness...
[13:37] herman Bergson: Here we have a problem Carmela....
[13:37] herman Bergson: for there are no two mental states of self awareness alike in two different persons...
[13:37] Qwark Allen: ㋡ ˜*•. ˜”*°•.˜”*°• Helloooooo! •°*”˜.•°*”˜ .•*˜ ㋡
[13:37] Qwark Allen: Hey! HAO
[13:37] Carmela Sandalwood: of course not...the systems are different
[13:37] Carmela Sandalwood: so?
[13:37] herman Bergson: What it is like to be me...is a special mental stat for me....
[13:38] herman Bergson: nobody in the whole world has that....
[13:38] Carmela Sandalwood: I'm not so sure that the mental state of 'being you' will be forever limited to you
[13:38] Carmela Sandalwood: it may be possible to transfer that data in the future
[13:38] herman Bergson: so my mental state of "what it is like to be me?" has a special property no other mental state in the world has
[13:38] herman Bergson: except my own of course....
[13:39] Gemma Allen (gemma.cleanslate): :-)
[13:39] Carmela Sandalwood: the question is whether the data can be transfered and used by the recipient
[13:39] Gemma Allen (gemma.cleanslate): that horse is hungry
[13:39] herman Bergson: We have to answer the question how to understand this first person property
[13:40] herman Bergson: well....we almost have seen all attempts to understand the mind....
[13:40] herman Bergson: soon we'll get to the issue of what makes the mind: consciousness
[13:41] Mistyowl Warrhol: Ok, if we each have our own thoughts, unique to us, where is that stored. If it is in tissue, does any of that transfer in situations of organ donations?
[13:41] herman Bergson: My conclusion of today is that I wouldn't bet on consciousness in machines
[13:41] Carmela Sandalwood: it might if you transferred brains, but not likely otherwise
[13:41] herman Bergson: That is a fascinating question Misty....
[13:41] Bejiita Imako: machines work way too different from us
[13:42] herman Bergson: Because the greek thought that the mind was in the heart....
[13:42] Hao Zaytsev: hehe
[13:42] herman Bergson: the Egyptians also didn't have a high esteem of the brain...they threw it away when mummifying a pharaoh…
[13:42] Mistyowl Warrhol: There are some cases in which ppl have seem to remember small data from someone that donated.. But that is a topic for another time :-)
[13:42] Carmela Sandalwood: so they were wrong...it happens
[13:43] Mick Nerido: There have be recent mouse brain experiments that shows memories could be transferred
[13:43] herman Bergson: Aristotle thought that the brain was an organ to cool the blood
[13:43] herman Bergson: Well....
[13:43] Carmela Sandalwood: Aristotle was also wrong about physics
[13:43] Mistyowl Warrhol: Some think the brain is something to play with :-)
[13:43] herman Bergson: We have the believe that the mind is (in) the brain....
[13:43] Gemma Allen (gemma.cleanslate): ♥ LOL ♥
[13:43] herman Bergson: In a way...as if the body doesn't play a part in it at all
[13:44] Carmela Sandalwood: yes, that is also simplistic Herman....the body is required for the sensory input at least
[13:44] herman Bergson: Some people believe that donor organs also contain something of the donating person....not just tissue
[13:45] Gemma Allen (gemma.cleanslate): i do'nt
[13:45] Mick Nerido: Frankinstein
[13:45] Sybyle Perdide: in a practical sense its true
[13:45] Carmela Sandalwood: I'd have to see the data...but that also doesn't make it non-mechanical
[13:45] Carmela Sandalwood: more specifically chemical
[13:45] Gemma Allen (gemma.cleanslate): we will never really agree on this :-)
[13:46] Mistyowl Warrhol: I think the biggest difference between brain and computers.. the brain is being bathe by chemicals from around the body which in turn effects how the brain responds. Whereas, the computer.. just uses data that was inputed, even though it can reassemble that data by design.
[13:46] herman Bergson: That isn't necessary Gemma....
[13:46] herman Bergson: as long as we keep thinking about it and questioning it..
[13:46] Gemma Allen (gemma.cleanslate): Yes-ah!
[13:46] Hao Zaytsev: aliens
[13:46] Carmela Sandalwood: MistyOwl: that may be, but even that can be represented by appropriate computing internals
[13:46] herman Bergson: Nice gus Hao.. ㋡
[13:46] Gemma Allen (gemma.cleanslate): herman it is time to put the christmas trees in the yard there
[13:47] Hao Zaytsev: hehe
[13:47] Gemma Allen (gemma.cleanslate): and snow
[13:47] Bejiita Imako: ah yes
[13:47] Bejiita Imako: really strange no snow here yet
[13:47] Bejiita Imako: and 7 deg warm outside now
[13:47] herman Bergson: Oh my ...so true Gemma.....
[13:47] Gemma Allen (gemma.cleanslate): 3 weeks till christmas
[13:47] herman Bergson: SO thank you all again for your interest and participation today
[13:47] Gemma Allen (gemma.cleanslate): ♥ Thank Youuuuuuuuuu!! ♥
[13:47] herman Bergson: I have to dismiss the class
[13:47] Bejiita Imako: really interesting
[13:47] Hao Zaytsev: why is a horse on the floor?
[13:47] Bejiita Imako: ㋡
[13:48] Gemma Allen (gemma.cleanslate): no clue
[13:48] herman Bergson: Have to put up my Xmas tree to keep Gemma happy
[13:48] Gemma Allen (gemma.cleanslate): trees
[13:48] Bejiita Imako: ㋡
[13:48] Qwark Allen: .-)))
[13:48] Sybyle Perdide: thank you Herman..it was great
[13:48] Mistyowl Warrhol: Carmela, the chemical of the body are altered by changes in our enviroment and our emotional response to that. Cant we reproduce that mechically?
[13:48] Hao Zaytsev: damn
[13:48] Hao Zaytsev: nice trip
[13:48] Bejiita Imako: tis was really great and interesting
[13:48] Gemma Allen (gemma.cleanslate): Bye, Bye ㋡
[13:48] Gemma Allen (gemma.cleanslate): fir biw
[13:48] :: Beertje :: (beertje.beaumont): it was very interesting Herman, thank you
[13:48] CONNIE Eichel: bye gemma
[13:48] Carmela Sandalwood: Yes, among other ways, but changing the electrical or magnetic environment
[13:48] Bejiita Imako: well must head back now but cu soon again
[13:49] Bejiita Imako: ㋡
[13:49] Qwark Allen: ¸¸.☆´ ¯¨☆.¸¸`☆** **☆´ ¸¸.☆¨¯`☆ H E R MA N ☆´ ¯¨☆.¸¸`☆** **☆´ ¸¸.☆¨¯`
[13:49] Qwark Allen: ty
[13:49] Hao Zaytsev: im excited
[13:49] Qwark Allen: was very good as usual
[13:49] Bejiita Imako: YAY! (yay!)
[13:49] Carmela Sandalwood: and that could even be in a feedback loop controlled by the CPU
[13:49] CONNIE Eichel: well, bye bye all, great class professor :)
[13:49] Carmela Sandalwood: TY Herman
[13:49] CONNIE Eichel winks
[13:49] herman Bergson whispers: thank you CONNIE
[13:50] CONNIE Eichel: :)
[13:50] Mistyowl Warrhol: Very interesting and fun. I have really enjoyed this and meeting everyone :-)
[13:50] herman Bergson: Nice, Misty ㋡
[13:50] herman Bergson: I do this now for more than 5 years
[13:50] Carmela Sandalwood: It is *very* interesting...thank you for doing it
[13:51] Carmela Sandalwood: while I may disagree, it is fun to think about it all
[13:51] herman Bergson: It would be a very dull class when everyone agreed with everyone
[13:51] Carmela Sandalwood: quite true
[13:52] Carmela Sandalwood: I think the problem is ultimately what 'understand' means...do we actually have anything but an operational definition?
[13:52] herman Bergson: The main goal of philosophy is not to get the right answers...
[13:52] herman Bergson: as you give an example Carmela...philosophy is about asking the right questions
[13:53] Carmela Sandalwood: *smiles*
[13:53] Mistyowl Warrhol: Or asking wrong questions, to get a different view?
[13:53] Carmela Sandalwood: and exploring the possible answers
[13:53] Mistyowl Warrhol: :-)
[13:53] herman Bergson: right...
[13:54] herman Bergson: When you take the Chinese Room argument for instance....
[13:54] Mistyowl Warrhol: Ok, I did try to be good and not overload everyone circuits LOL
[13:54] Carmela Sandalwood: it seems that often philosophy is about figuring out what the 'correct' definitions actually are
[13:54] herman Bergson: Look it up in the Internet Encyclopedia of pHilosophy....
[13:55] herman Bergson: Searle himself mentioned a number of more than 200 counter arguments to it..
[13:55] herman Bergson: Yet I think it is a pretty convincing argument that shows that computers can not be identical with our brain...
[13:56] herman Bergson: No semantics , no awareness, no consciousness....
[13:56] Carmela Sandalwood: and I think it misses some crucial aspects of both how computers work and how understanding happens
[13:56] Carmela Sandalwood: but semantics can be syntactical
[13:56] herman Bergson: Despite all science fiction computer minds
[13:56] Carmela Sandalwood: in an environment
[13:57] herman Bergson: no Carmela....
[13:57] Mistyowl Warrhol: But it is a fun idea to think about.. Computers gaining knowledge on their own.
[13:57] Carmela Sandalwood: My guess is that AI will happen when we program robots to change internal states appropriately
[13:57] herman Bergson: No computer will ever "know" the truth value of any complex symbol
[13:57] Carmela Sandalwood: why not? and why do we?
[13:58] herman Bergson: That is the big question indeed
[13:58] herman Bergson: Let's postpone the answer to further lectures ㋡
[13:58] Sybyle Perdide: oh
[13:58] Sybyle Perdide: the suspense was growing
[13:59] Carmela Sandalwood: *smiles*...sounds good
[13:59] Sybyle Perdide: and now that cliffhanger
[13:59] herman Bergson: Then I have time to set up my Chrismas tree and snow here ^_^
[13:59] Sybyle Perdide: laughs
[13:59] Sybyle Perdide: thats an argument
[13:59] Carmela Sandalwood: hopefully my schedule will be nice and I'll be able to attend
[13:59] herman Bergson: You are always welcome Carmela ㋡
[14:00] Carmela Sandalwood: as long as RL complies ;)
[14:00] Carmela Sandalwood: thank you for the discussion
[14:00] Sybyle Perdide: I wish you a nice evening
[14:00] Mistyowl Warrhol: Yes, the other part of the equation. RL !!!
[14:00] Sybyle Perdide: see you on tuesday :)
[14:00] herman Bergson: My pleasure Carmela ㋡


Sunday, December 4, 2011

365: The Mind is a computer

Many people who work in cognitive science and in the philosophy of mind think that the most exciting idea of the past generation, indeed of the past two thousand years, is that the mind is a computer program.

Specifically, the idea is that the mind is to the brain as a computer program is to the computer hardware. John Searle has baptized this view as "Strong Artificial Intelligence".

John Searle, born in 1932 and still alive and active, is noted for his contributions to the philosophy of language, the philosophy of mind and social philosophy, and began teaching at Berkeley in 1959.

In the previous lecture we learnt that a computer is a symbol-manipulating machine, and that the real computers of today use only the symbols "1" and "0".

When a computer has to solve a problem, it uses an algorithm. An algorithm is a systematic procedure for solving a problem in a finite number of steps.

All this is controlled by a set of rules. For example, you can have the rule "If condition C, then do A", which in practice could be: if the complex symbol '1111' occurs, replace it by '0000'.
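
As an illustration only, here is how such a rule could be written down in Python. The tape contents and the single rule are taken from the example above, and the sketch claims nothing beyond that.

# Minimal sketch of rule-driven symbol manipulation:
# "if the complex symbol '1111' occurs, replace it by '0000'".
# The machine matches patterns and rewrites strings; the symbols mean nothing to it.

RULES = [
    ("1111", "0000"),   # if condition C (pattern present), then do A (replace it)
]

def step(tape):
    """Apply the first rule whose condition matches, once; otherwise halt."""
    for pattern, replacement in RULES:
        if pattern in tape:
            return tape.replace(pattern, replacement, 1)
    return tape

print(step("0011110"))   # -> 0000000: pure syntax, no meaning attached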

Now suppose we put a computer with a specific program on subject X in a room, and an expert on subject X in another room.

Then we let experts outside the rooms type in questions on a console. Both the computer and the man in the room can answer the questions.

This test, named the Turing Test, claims that if the experts who ask the questions cannot distinguish the behavior of the computer from that of a human, then the computer has the same cognitive abilities as a human.

It would mean that the computer is as good as the human expert on subject X. In other words, the computer does the same as the mind of the human expert: it understands the questions and answers them correctly.

This is odd. A computer is a device that manipulates symbols according to a given set of rules. Whatever the symbols mean doesn't matter. If you use the proper algorithm, you get the solution to any problem.

Is that how our mind works? Is it indeed, like a computer program, a symbol-manipulating system? This question has sparked a battle in the philosophy of mind, due to the famous Chinese Room argument as formulated by John Searle.

It is like this: you sit in a room with a bunch of boxes in which you find cards with Chinese characters on them. You have no understanding of Chinese at all.

But you have a book with rules, telling you things like "when you receive symbols X and Y, then return as an answer symbol P from box 2".

Outside the room there are Chinese-speaking people who send you their questions. You use your book of rules and return the appropriate symbols, which turn out to be the correct answers.

It means that you have in principle passed the famous Turing Test, but you do not thereby understand a single word of Chinese.

If you don't understand Chinese by using a book of rules and manipulating symbols, neither does any digital computer using its algorithm.
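
The rule book itself can be pictured as nothing more than a lookup table. The sketch below, with invented box contents and an invented rule, mirrors the rule quoted above; it is meant only to show that producing the "correct" card requires no understanding of what is written on it.

# Hypothetical sketch of the Chinese Room rule book as a lookup table.
# The boxes of cards and the single rule are invented for illustration.

BOXES = {2: ["P", "Q"], 3: ["R"]}          # cards available in each box

RULE_BOOK = [
    (("X", "Y"), (2, "P")),                # "when you receive X and Y, return P from box 2"
]

def operator(received):
    """Follow the book: match the incoming symbols, hand back the prescribed card."""
    for condition, (box, card) in RULE_BOOK:
        if received == condition and card in BOXES[box]:
            return card                     # take the card from its box and pass it out
    return "?"                              # no rule matches: hand back a blank card

print(operator(("X", "Y")))                 # -> P, produced without understanding anything

Swap the placeholder symbols for real Chinese characters and the lookup is exactly as mindless.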

However, when you are asked a question in English, you do not experience it as a set of meaningless symbols, nor do you look up a set of rules to manipulate them.

When I ask you whether the earth is flat or a sphere, you can give an answer, because the words 'earth', 'flat' and 'sphere' have a meaning. And your answer is based on empirical facts.

And that is what a computer can never achieve: adding meaning to the symbols it manipulates. In some respects that is one of its powers, to be absolutely mindless.


The Discussion

[13:23] herman Bergson: Thank you ㋡
[13:23] Sybyle Perdide: great
[13:23] Sybyle Perdide: as alway
[13:23] Sybyle Perdide: s
[13:23] Bejiita Imako: yes
[13:23] Bejiita Imako: ㋡
[13:23] Farv Hallison: Thank you, herman.
[13:24] herman Bergson: Conclusion....computers never can have a mind
[13:24] agnos: Thanks
[13:24] Bejiita Imako: ed thats how it is, everyone who have written a computer program can see that
[13:24] Farv Hallison: But both stories sidestep the issue of defining what the mind is.
[13:24] herman Bergson: no Farv...
[13:25] Bejiita Imako: simple things like playing music requires the computer to for each sequence of the song do a complex series of instructions every time and loop millions of times per second the same instruction over and over
[13:25] herman Bergson: A mind does more than a computer does...
[13:25] Bejiita Imako: it can never learn it by itself
[13:25] Elle (ellenilli.lavendel) is Offline
[13:25] herman Bergson: a computer only shuffles symbols around according a bunch of rules
[13:25] herman Bergson: a mind ads meaning to symbol....a mind has content
[13:25] agnos: But we seemed to have developed into having a mind
[13:25] Bejiita Imako: its like if we would never learn the notes but always have a paper to look at
[13:25] Farv Hallison: The rule might generate new computer programs.
[13:26] Mistyowl Warrhol is Online
[13:26] Farv Hallison: a dictionary can add meaning to words.
[13:26] herman Bergson: maybe..but they do the same as all computer programs…shuffle symbols around without any understanding
[13:26] Bejiita Imako: yes indeed
[13:27] herman Bergson: Yes..that is what we did with our mind
[13:27] herman Bergson: Just look how crude the translators work....even the best....
[13:27] Bejiita Imako: a computer cpu is just a bunch of millions of small sort of light switches that opens and closes in a specific way to the program code
[13:27] Farv Hallison: well, a computer could have a dictionary of meanings and even make new entries
[13:27] Bejiita Imako: because the cpu is designed in hardware so that a specific sequence of 1 and 0 will cause those switches to flip
[13:28] Bejiita Imako: its nothing more than that
[13:28] herman Bergson:a computer can have a database, Farv..but We have to fill it
[13:28] Sybyle Perdide: may I play advocata diaboli?
[13:28] Bejiita Imako: the compiler that translate the c or basic code must have knowledge about the basic construction of the cpu
[13:28] herman Bergson: sure Sybyle
[13:28] Bejiita Imako: thats how you program in assembler
[13:29] CONNIE Eichel: ^^
[13:29] Sybyle Perdide: the problem of the computer is, that the rules are only on a single dimension
[13:29] herman Bergson: hold on Bejiita....plz ㋡
[13:29] Bejiita Imako: then you must know registers address locations and everything about the basic hardware to communicate with the machine
[13:29] netty Keng is Offline
[13:29] Sybyle Perdide: the human pc has many layers of rules
[13:29] Sybyle Perdide: smelling, looking, feeling and so on
[13:29] Farv Hallison: I like the smelling
[13:29] bergfrau Apfelbaum is Online
[13:29] herman Bergson: Oh yes....I even left that feature our on purpose...
[13:30] Lizzy Pleides: our brain influences sympaticus and parasympaticus, it influences if you feel well or not, how can a computer feel well?, can he feel anger fear or love?
[13:30] Farv Hallison: The computer could have a chemical lab that acts like a nose.
[13:30] herman Bergson: No Farv....
[13:30] herman Bergson: unfortunately not...
[13:31] herman Bergson: the chemical lab produces only data as in put which are just symbols for the computer
[13:31] herman Bergson: then it has its algorithm to analyze the data
[13:31] Bejiita Imako: yes
[13:31] Netty Crystal is Online
[13:31] herman Bergson: you find such computers to analyze gasses for instance in many laboratories
[13:32] herman Bergson: But the machine has no understanding at all of the meaning of its output
[13:32] herman Bergson: It is our mind that adds the semantics to the charts and numbers
[13:32] Bejiita Imako: and basically a computer have only a specific set of fixed instructions it can understand, the compiler in for example c must translate the c code to these basic commands and thats all the commands the cpu will ever understand untill a new model arrives with more instruction sets
[13:33] Sybyle Perdide: but we don't know either, why an how we react on chemical signs
[13:33] Bejiita Imako: tats why a computer can never at least not as they work now feel or sense
[13:33] Sybyle Perdide: we just started to analyze it
[13:34] herman Bergson: Well the idea of a sensory computer with understanding of its sense experiences is a Science Fiction idea
[13:34] herman Bergson: Take Data form Startrek for instance...
[13:35] herman Bergson: The funny thing with him was that he could play Bach on a cello, but couldn't put feeling in it
[13:35] Farv Hallison: good point. I can't put feeling into it either.
[13:36] herman Bergson: So the scenario writers stayed close to the symbol shuffling of a computer
[13:36] herman Bergson: Data had a brother Farv.....
[13:36] herman Bergson: Looks curiously at Farv
[13:36] Farv Hallison: oh?
[13:36] Bejiita Imako: star trek data?
[13:36] herman Bergson: Yes
[13:36] Bejiita Imako: yes he had a brother
[13:36] herman Bergson: But that brother was the bad guy is one of the episodes
[13:36] Janette Shim is Offline
[13:36] Bejiita Imako: who was kind of evil programmed i think
[13:37] Bejiita Imako: yes
[13:37] Farv Hallison: Could the brother play with feeling?
[13:37] herman Bergson: Well....
[13:37] herman Bergson: that is a good question Farv....for that brother really wanted evil....
[13:37] Farv Hallison: Can you be evil without having evil feelings?
[13:37] herman Bergson: which is an emotional choice
[13:38] herman Bergson: It is always fun to see how in SF they have to struggle with a computer with a mind...
[13:38] herman Bergson: especially when the thing gets its own feelings and ideas
[13:38] Bejiita Imako: and thats also a thing, can you make a computer program so that it for some unforeseen reason turn against you like in terminator
[13:39] herman Bergson: That is way beyond what a computer really is and will be in the future
[13:39] Bejiita Imako: i don't think so cause then you must have deliberateley programmed it to kill you and who does that?
[13:39] Bejiita Imako: a computer only does what you tell it to
[13:40] Farv Hallison: well, the computer might control the power grid and give itself more power when it feel is it circuits slowing down.
[13:40] Joann Innovia (kimkiddy) is Offline
[13:40] Bejiita Imako: even if you can make a computer program take in data from outside and "learn" i dont' think that a machine that is made for good suddenly by external input could go berserk
[13:40] Bejiita Imako: and kill you
[13:40] herman Bergson: Yes Farv...that is what it in SF movies always does....
[13:41] herman Bergson: but it only can do so when programmed that way....
[13:41] Bejiita Imako: yes
[13:41] herman Bergson: Greatest fun is always when they in a movie never get the idea to simply pull the plug
[13:41] Bejiita Imako: hehe yes, thats rule nr one
[13:42] herman Bergson: weird thing is then when you approach the plug and outlet the computer attacks you :-)
[13:42] Farv Hallison: 2001 pulled the plug.
[13:42] Bejiita Imako: ALL machines no matter what it is should have an emergency stop or a mean to cut the power as soon it loose control
[13:42] herman Bergson: True Farv..indeed......he removes all those red objects one by one..
[13:43] Farv Hallison: but the computer might be running our life control system, so we can't shut it down.
[13:43] Bejiita Imako: but well i may bee hard to get to the plug of the opier machine if it chases you around the office meanwhile
[13:43] Bejiita Imako: lol
[13:43] Bejiita Imako: copying
[13:43] herman Bergson: lol Good one FArv...
[13:43] :: Beertje :: (beertje.beaumont): aardlekschakelaar:)
[13:43] herman Bergson: When we pull the plug here SL ceases to exist and we all are gone....:-((
[13:44] CONNIE Eichel: yes :/
[13:44] herman Bergson: So, we are defenseless against our computers!!!!!
[13:44] Farv Hallison: yes, the police might pull the plug if we start to demonstrate.
[13:44] herman Bergson: We are all trapped inhere!!!!!
[13:44] CONNIE Eichel: hehe
[13:44] Farv Hallison: We are trapped in the Matrix/
[13:44] herman Bergson: Yeah..Let's OCCUPY SL !!!!!
[13:45] :: Beertje :: (beertje.beaumont): lol
[13:45] Bejiita Imako: well we still exist as code but the code need an active cpu to run so you can say we are like viruses in sl, a virus ( biological) needs a living host, its just a bunch of dna as our avatars just is code that need a powered on cpu and memory to operate on
[13:45] herman Bergson: Well...thank you all for your participation again...
[13:45] Lizzy Pleides: thanks to YOU herman
[13:45] Farv Hallison: this has been great fun.
[13:45] Bejiita Imako: yeah
[13:45] Bejiita Imako: really nice
[13:45] Sybyle Perdide: yes
[13:45] Guestboook van tipjar stand: Lizzy Pleides donated L$50. Thank you very much, it is much appreciated!
[13:45] Bejiita Imako: \o/
[13:45] Bejiita Imako: || Hoooo!
[13:45] Bejiita Imako: / \
[13:46] Bejiita Imako: ㋡
[13:46] herman Bergson: a painful observation that we are trappe din here and cant pull the plug unless we want to kill ourselves...
[13:46] Farv Hallison: ㋡
[13:46] herman Bergson: I hope you all can live with that ㋡
[13:46] Bejiita Imako: but behind the avatar is still a real person who control it
[13:46] CONNIE Eichel: hehe
[13:46] Bejiita Imako: my avatar doesn't do anything my rl self don't tell it to
[13:46] Bejiita Imako: its operator
[13:46] herman Bergson: That real person might survive then Bejiita...
[13:47] Bejiita Imako: its an interesting thought for sure
[13:47] herman Bergson: Thank you all and dont be afraid of thinking computers..they dont exist
[13:47] Bejiita Imako: just machines
[13:47] CONNIE Eichel: :)
[13:47] Farv Hallison: I wont do anything my tail wouldn't do.
[13:48] Bejiita Imako: and machines only do what you tell them, unless some dangerous bug is in the code
[13:48] herman Bergson: Class dismissed ㋡
[13:48] :: Beertje :: (beertje.beaumont): *•.¸'*•.¸ ♥ ¸.•*´¸.•*
[13:48] :: Beertje :: (beertje.beaumont): Goed Gedaan Jochie!!
[13:48] :: Beertje :: (beertje.beaumont): .•*♥¨`• BRAVO!!!! •¨`♥*•.
[13:48] :: Beertje :: (beertje.beaumont): ¸.•*`¸.•*´ ♥ `*•.¸`*•.¸
[13:48] Bejiita Imako: and that bug is then telling the machine to do wring things
[13:48] Farv Hallison: All code has bugs
[13:48] Lizzy Pleides: clap clap clap...wohooooooo!
[13:48] CONNIE Eichel: great class, as always :)
[13:48] Bejiita Imako: Hooo!!!
[13:48] Bejiita Imako: Hoooo!
[13:48] Bejiita Imako: yeah
[13:48] herman Bergson: smiles
[13:48] Bejiita Imako: liked it a slot
[13:48] Bejiita Imako: ㋡
[13:48] Bejiita Imako: ok cu all
[13:48] herman Bergson: thank you ㋡
[13:48] Bejiita Imako: lot
[13:49] Farv Hallison: bye Bejita
[13:49] Sybyle Perdide: bye Bejita
[13:49] Bejiita Imako: cu soon
[13:49] CONNIE Eichel: bye bye, see you next class :)
[13:49] Bejiita Imako: ㋡
[13:49] herman Bergson whispers: Bye CONNIE
[13:49] Lizzy Pleides: Tc Connie
[13:49] Farv Hallison: bye Connie
[13:49] Sybyle Perdide: ciao Connie
[13:49] CONNIE Eichel: bye :)

