Countering the Argument with Thorsten


sciborg2

« Reply #75 on: December 01, 2018, 08:53:36 pm »
I think if you want to counter the argument you have to start with the underlying questions that precede the mind/body question(s).

I mean, I think those [mind/body] questions play a role, but I feel that trying to play the game the way Libertarians & Compatibilists do - accepting just about all the assumptions of the Reductionist/Determinist, then either looking for miracles or playing semantic games - leaves one destined to lose.

Questions about time, substance, causality - these likely have more promise than an initial focus on the human level.

sciborg2

« Reply #76 on: January 10, 2019, 11:42:12 pm »

TaoHorror

« Reply #77 on: January 11, 2019, 03:18:19 am »
Quote
Think I posted these Tallis articles long ago, not sure though:

What Neuroscience Cannot Tell Us About Ourselves

How Can I Possibly Be Free? - Why the neuroscientific case against free will is wrong

You're an ace, Sci - haven't read it all yet, about to hit the hay, but wanted to say thank you.

sciborg2

« Reply #78 on: January 11, 2019, 04:40:28 am »

Heh, figured I could potentially save you from buying the damn book if you read these and got what you needed...or thought he was a fucking idiot.

:-)

TLEILAXU

« Reply #79 on: January 13, 2019, 03:48:08 pm »
Those are some long-ass articles, so I didn't read nearly all of them; I'm just going to post some quotes and comment on them.

From the first article.
Quote
Ironically, by locating consciousness in particular parts of the material of the brain, neuroscientism actually underlines this mystery of intentionality, opening up a literal, physical space between conscious experiences and that which they are about. This physical space is, paradoxically, both underlined and annulled: The gap between the glass of which you are aware and the neural impulses that are supposed to be your awareness of it is both a spatial gap and a non-spatial gap. The nerve impulses inside your cranium are six feet away from the glass, and yet, if the nerve impulses reach out or refer to the glass, as it were, they do so by having the glass “inside” them. The task of attempting to express the conceptual space of intentionality in purely physical terms is a dizzying one. The perception of the glass inherently is of the glass, whereas the associated neural activity exists apart from the cause of the light bouncing off the glass. This also means, incidentally, that the neural activity could exist due to a different cause. For example, you could have the same experience of the glass, even if the glass were not present, by tickling the relevant neurons. The resulting perception will be mistaken, because it is of an object that is not in fact physically present before you. But it would be ludicrous to talk of the associated neural activity as itself mistaken; neural activity is not about anything and so can be neither correct nor mistaken.
Isn't this essentially a God of the Gaps argument? Just because we cannot describe this mental representation in neuroscientific terms, it does not necessarily follow that there is some ontological difference that separates human consciousness from the rest of the universe.

From the second article; he keeps going with the intentionality argument. 
Quote
The case for determinism will prevail over the case for freedom so long as we look for freedom in a world devoid of the first-person understanding — and so we will have to reacquaint ourselves with the perspective that comes most naturally to us. Recall that, if we are to be correct in our intuition that we are free, the issue of whether or not we are the origin of our actions is central. Seen as pieces of the material world, we appear to be stitched into a boundless causal net extending from the beginning of time through eternity. How on earth can we then be points of origin? We seem to be a sensory input linked to motor output, with nothing much different in between. So how on earth can the actor truly initiate anything? How can he say that the act in a very important sense begins with him, that he owns it and is accountable for it — that “The buck starts here”?

The key to this ownership lies in intentionality. This is not to be confused with intentions, the purposes of actions. “Intentionality” designates the way that we are conscious of something, and that the contents of our consciousness are thus about something. Intentionality, in its fully developed form, is unique to human beings, who alone are fully-fledged subjects explicitly related to objects. It is the seed of the self and of freedom. It is, as of now, entirely mysterious — which is not to say that it is supernatural or in principle beyond our understanding, but rather that it cannot be explained entirely in terms of the processes and laws that operate in the material world. Its relevance here is that it is the beginning of the process by which human beings transcend the material world, without losing contact with it. Human freedom begins with this about-ness of human consciousness.
Again, I cannot see it any other way than as a God of the Gaps. It is clever because it's very hard to argue against this mode of reasoning from a 'scientific' perspective, but flip things around: instead of asking why consciousness should be 'reducible' to 'science', ask why it should not be. It is known and uncontroversial that we share the same basic characteristics as every other living thing on earth. Our basic metabolic pathways are more or less identical to the basic metabolic pathways in E. coli, and our macromolecules are made out of the same monomers. Are we truly different, or are we, ironically due to our 'hardwiring', not so different, but inclined to think so because of some sort of anthropocentric intentional thought process?

Also, regarding intentionalism, Bakker has like 1000 blogposts about that stuff.

sciborg2

« Reply #80 on: January 13, 2019, 06:28:55 pm »

Hmmm...to me a gaps argument takes advantage of a gap as the crux of its argument. I think this is different from a metaphysical demonstration that starting with assumptions like 'no mental character in matter' leads to the conclusion that this kind of materialism has to be false?

I've gone through some of Bakker's stuff, and eliminativism did seem like a live possibility, but then I read Alex Rosenberg's stuff about Intentionality in The Atheist's Guide to Reality, where he says we simply have to be wrong about having thoughts:

Quote
"A more general version of this question is this: How can one clump of stuff anywhere in the universe be about some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?

...Let’s suppose that the Paris neurons are about Paris the same way red octagons are about stopping. This is the first step down a slippery slope, a regress into total confusion. If the Paris neurons are about Paris the same way a red octagon is about stopping, then there has to be something in the brain that interprets the Paris neurons as being about Paris. After all, that’s how the stop sign is about stopping. It gets interpreted by us in a certain way. The difference is that in the case of the Paris neurons, the interpreter can only be another part of the brain...

What we need to get off the regress is some set of neurons that is about some stuff outside the brain without being interpreted—by anyone or anything else (including any other part of the brain)—as being about that stuff outside the brain. What we need is a clump of matter, in this case the Paris neurons, that by the very arrangement of its synapses points at, indicates, singles out, picks out, identifies (and here we just start piling up more and more synonyms for “being about”) another clump of matter outside the brain. But there is no such physical stuff.

Physics has ruled out the existence of clumps of matter of the required sort...

…What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.

It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all...When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong."

The idea that we don't have thoughts about things, Intentionality...it seems to me the correct conclusion is that materialism is false, not that Cogito Ergo Sum is a mistake.

Long ago I did ask Bakker about this, but I don't think I fully understood his answer. I should ask him again but I need to read my copy of Philosophical Foundations of Neuroscience so I don't completely embarrass myself.

Regarding our similarity to other organisms...I mean bees apparently understand the concept of Zero so perhaps mentality goes down further than we think, maybe even as deep as the panpsychics suggest.  ;)

TLEILAXU

« Reply #81 on: January 14, 2019, 05:06:15 pm »
I don't understand the argument. Couldn't you just as easily make an analogy consisting of, say, a robot with a camera? The camera takes as input photons from the surroundings and creates an output consisting of an array of pixels or something, upon which further computations are then done in order to make some decision according to some goal function. There's no infinite regress here. Generally I don't like comparing a human brain with a piece of software, but I think this is one case where the analogy makes sense, except you have a lot of higher-order representations, computations, etc. going on, because you literally have like a trillion interconnected cells.
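
A minimal sketch of the pipeline I mean, in Python - the names, the goal function, and the numbers are all made up for illustration:

Code
import numpy as np

# Hypothetical robot pipeline: photons -> pixel array -> computation -> decision.
def camera(photon_counts):
    """The 'sensor': just maps incoming photon counts onto a pixel array."""
    return np.asarray(photon_counts, dtype=float)

def goal_function(pixels):
    """Made-up goal: prefer brighter views ('seek the light')."""
    return pixels.mean()

def decide(pixels, threshold=0.5):
    """Further computation over the pixel array; no inner interpreter required."""
    return "move_forward" if goal_function(pixels) > threshold else "turn"

scene = [[0.2, 0.9], [0.8, 0.7]]   # stand-in for photons hitting the sensor
print(decide(camera(scene)))       # -> "move_forward"; input to output, no regress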


sciborg2

« Reply #82 on: January 14, 2019, 07:30:07 pm »


Lol at the dog - but re: software...isn't this just an instantiation of a Turing Machine, in which case the calculations only have the meaning we give them?

I mean, any bit string can be interpreted differently - which is not to say every string of 0s and 1s can be every program imaginable, but at the very least it seems any such string can represent a countably infinite number of programs?

I guess I don't see much difference between a computer and an abacus in terms of holding some aboutness in the material?
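
A toy illustration of the point, in Python - the three decodings are arbitrary stipulations, nothing intrinsic to the bits:

Code
# One and the same bit string read under three different, equally arbitrary conventions.
bits = "0100100001101001"

# Reading 1: an unsigned integer.
as_int = int(bits, 2)  # 18537

# Reading 2: two ASCII characters.
as_text = bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8)).decode("ascii")  # "Hi"

# Reading 3: a program for a made-up 2-bit instruction set.
opcodes = {"00": "NOP", "01": "INC", "10": "DEC", "11": "HALT"}
as_program = [opcodes[bits[i:i+2]] for i in range(0, len(bits), 2)]

print(as_int, as_text, as_program)
# The bits don't carry any of these meanings by themselves; each reading
# exists only relative to the decoder we happened to choose.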

Jabberwock03

« Reply #83 on: November 23, 2019, 12:58:49 pm »

If we consider that a human brain can be (even just vaguely) compared to a computer (and personally I think the comparison makes sense in this case), you don't need to give meaning to the calculation.

A piece of software receives one (or many) inputs and returns an output. The output is the "physical" manifestation of the calculation, independently of any "meaning".
We can think of the brain the same way: we take in stimuli/inputs through our senses, and we output some actions.

I don't see how the fact that our brain is freaking complicated, and that its trillions of internal neuronal activities/"operations" per second trick it into "consciousness", changes anything or counters the Argument.

But in the end, philosophy won't explain anything; all we can do is wait for science to give an answer. We can speculate, but it's just that: speculation.

sciborg2

« Reply #84 on: December 08, 2019, 05:43:54 am »

Except you do have thoughts about things, which is where the meaning question comes from: it refers to the Aboutness of Thought (what Bakker & other philosophers call Intentionality), which Bakker thinks can be reduced to a physics explanation (matter, energy, forces, etc).

To me, Eliminativism toward Intentionality is the central point of Bakker's BBT; everything turns on this issue. So because the correctness of a program depends on the intention of the programmer - it's the only way to tell the difference between an accidental bug and deliberate sabotage - I'd say programs cannot explain away Intentionality.
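
A hypothetical illustration of that point (the function and the two "specs" are invented for the example):

Code
def add_fee(amount):
    # Nothing in the code itself says whether 1.03 is right.
    return amount * 1.03

# Spec A (programmer intended "apply a 3% fee"):  the function is correct.
# Spec B (programmer intended "apply a 30% fee"): the very same function is a bug,
# or sabotage if someone changed it on purpose.
# The physical program and its input/output behavior are identical either way;
# "correct" vs "buggy" only shows up relative to an intention outside the code.
print(add_fee(100.0))  # 103.0 under both readings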

Admittedly there are other issues at play, like the nature of causation and mental causation, but given that we use Intentionality to pick out interest-relative causal chains, it's probably a good starting point for refuting the Argument. Which is - IIRC - all I was getting at there.

As for whether science can decide this issue...I'd agree with you if you're talking about something like a revised version of the Libet-type experiments (the current set apparently got debunked), but not if we're talking about eliminativism of Intentionality. I just don't see how someone can do anything but find correlations since - as per above - finding a causal explanation for Intentionality would require Intentionality.

Jabberwock03

« Reply #85 on: December 08, 2019, 11:02:36 am »

>So because the correctness of a program depends on the intention of the programmer - it's the only way to tell the difference between an accidental bug and deliberate sabotage - I'd say programs cannot explain away Intentionality.

That's why analogies are just analogies and nothing more.
The correctness of a program isn't the issue here: first, because even a bugged/sabotaged program will take inputs and return outputs independently of the original intention; second, because we are obviously not programs but complex, randomly evolved chemistry, not programmed to do something specific by someone else.

So in my opinion, programs can explain it away if we accept the premise that we are complex physical machines.
The only other option I can conceive is that there is some magic giving the brain power over the physical world. But I won't accept that, as I don't see any proof that it might be the case (just like I don't put garlic on my front door, because nothing indicates the existence of vampires).

The brain being too blind to know that it reacts instead of actually acting is moot to me, as if the brain were actually the origin of actions it would break causality.

sciborg2

« Reply #86 on: December 08, 2019, 10:29:34 pm »

But the "garlic on the door" is assuming that some Holy Grail programs (namely the ones in our brain) have self-awareness and determinate thoughts and others are just atoms in the void?

There will be input and output into minds, but minds also have thoughts referencing aspects of reality - the "Aboutness" of our thinking that philosophers call Intentionality.

This all goes back to the Alex Rosenberg quote - how can one clump of matter (neurons) be about another clump of matter (Paris) when matter does not have intrinsic representation? It's less about programs lacking meaning than about this question.

Alex Rosenberg and Bakker are correct (AFAICTell) that the only conclusion for those committed to Physicalism is that this seeming Aboutness has to be false in some sense, but I just cannot see how that could be.

H

« Reply #87 on: December 09, 2019, 01:48:02 pm »
Quote
Alex Rosenberg and Bakker are correct (AFAICTell) that the only conclusion for those committed to Physicalism is that this seeming Aboutness has to be false in some sense, but I just cannot see how that could be.

From Rosenberg's latest book:
Quote
What makes the neurons in the hippocampus and the medial entorhinal cortex of the rat into grid cells and place cells—cells for location and direction? Why do they have that function, given that structurally they are pretty much like many other neurons throughout both the rat and the human brain?
From as early in evolution as the emergence of single-cell creatures, there was selection for any mechanism that just happened to produce environmentally appropriate behavior, such as being in the right place at the right time. In single-cell creatures, there are “organelles” that “detect” gradients in various chemicals or environmental factors (sugars, salts, heat, cold, even magnetic fields). “Detection” here simply means that, as these gradients strengthen or weaken, the organelles change shape in ways that cause their respective cells to move toward or away from the chemicals or factors as the result of some quite simple chemical reactions. Cells with organelles that happened to drive them toward sugars or away from salts survived and reproduced, carrying along these adaptive organelles. The cells whose organelles didn’t respond this way didn’t survive. Random variations in the organelles of other cells that just happened to convey benefits or advantages or to meet those cells’ survival or reproductive needs were selected for.
The primitive organelles’ detection of sugars or salts consisted in nothing more than certain protein molecules inside them changing shape or direction of motion in a chemical response to the presence of salt or sugar molecules. If enough of these protein molecules did this, the shape of the whole cell, its direction, or both would change, too. If cells contained organelles with iron atoms in them, the motion of the organelles and the cells themselves would change as soon as the cells entered a magnetic field. If this behavior enhanced the survival of the cells, the organelles responsible for the behavior would be called “magnetic field detectors.” There’d be nothing particularly “detecting” about these organelles, however, or the cells they were part of. The organelles and cells would just change shape or direction in the presence of a magnetic field in accordance with the laws of physics and chemistry.
The iterative process of evolution that Darwin discovered led from those cells all the way to the ones we now identify as place and grid cells in the rat’s brain. The ancestors to these cells—the earliest place and grid cells in mammals—just happened to be wired to the rest of the rat’s ancestors’ neurology, in ways that just happened to produce increasingly adaptive responses to the rat’s ancestors’ location and direction. In other mammals, these same types of cells happened to be wired to the rest of the neurology in a different way, one that moved the evolution of the animal in a less-adaptive direction. Mammals wired up in less-adaptive ways lost out in the struggle for survival. Iteration (repetition) of this process produced descendants with neurons that cause behavior that is beautifully appropriate to the rat’s immediate environment. So beautifully appropriate, that causing the behavior is their function.
The function of a bit of anatomy is fixed by the particular adaptation that natural selection shaped it to deliver. The process is one in which purpose, goal, end, or aim has no role. The process is a purely “mechanical” one in which there are endlessly repeated rounds of random or blind variation followed by a passive process of environmental filtration (usually by means of competition to leave more offspring). The variation is blind to need, benefit, or advantage; it’s the result of a perpetual throwing of the dice in mixing genes during sex and mutation in the genetic code that shapes the bits of anatomy. The purely causal process that produces functions reveals how Darwin’s theory of natural selection banishes purpose even as it produces the appearance of purpose; the environmental appropriateness of traits with functions tempts us to confer purpose on them.
What makes a particular neuron a grid cell or a place cell? There’s nothing especially “place-like” or “grid-like” about these cells. They’re no different from cells elsewhere in the brain. The same goes for the neural circuits in which they are combined. What makes them grid cells and place cells are the inputs and outputs that natural selection linked them to. It is one that over millions of years wired up generations of neurons in their location in ways that resulted in ever more appropriate responses for given sensory inputs from the rat’s location and direction.
Evolutionary biology identifies the function of the grid and place cells in the species Rattus rattus by tracing the ways in which environments shaped cells in the hippocampus and entorhinal cortex of mammalian nervous systems to respond appropriately (for the organism) to location and direction. Their having that function consists in their being shaped by a particular Darwinian evolutionary process.
But what were the “developmental” details of how these cells were wired up to do this job in each individual rat’s brain? After all, rats aren’t born with all their grid and place cells in place (Manns and Eichenbaum, 2006). So how do they get “tuned” up to carry continually updated environmentally appropriate information about exactly where the rat is and which way the rat needs to go for food or to avoid cats? Well, this is also a matter of variation and selection by operant conditioning in the rat brain, one in which there is no room for according these cells “purpose” (except as a figure of speech, like the words “design problem” and “selection” that are used as matters of convenience in biology even though there is no design and no active process of selection in operation at all).
Like everything else in the newborn rat’s anatomy, neurons are produced in a sequence and quantity determined by the somatic genes in the rat fetus. Once they multiply, the neurons in the hippocampus and the entorhinal cortex, and many other neurons in the rat’s brain as well, make and unmake synaptic connections with each other. Synaptic connections that lead to behavior rewarded by the environment, such as finding the mother’s teat, are repeated and thus strengthened physically (by the process Eric Kandel discovered; Kandel, 2000). Among the connections made, many are then unmade because they lead to behaviors that are not rewarded by feedback processes that strengthen the synaptic connections physically. Some are even “punished” by processes that interrupt them. In the infant rat, the place cells make contact with the grid cells by just such a process in the first three weeks of life, enabling the rat’s brain to respond so appropriately to its environment that these cells are now called “place” and “grid” cells (O’Keefe and Dostrovsky, 1979). Just as in the evolution of grid and place cells over millions of years, so also in their development in the brain of a rat pup, there is no room whatever for purpose. It’s all blind variation, random chance, and the passive filtering of natural selection.
These details about how the place cells and the grid cells got their functions are important here for two reasons. First, they reflect the way that natural selection drives any role for a theory of mind completely out of the domain of biology, completing what Newton started for the domain of physics and chemistry. They show how the appearance of design by some all-powerful intelligence is produced mindlessly by purely mechanical processes (Dennett, 1995). And they make manifest that the next stage in the research program that began with Newton is the banishment of the theory of mind from its last bastion—the domain of human psychology.
Second, these details help answer a natural question to which there is a tempting but deeply mistaken answer. If the grid cells and the place cells function to locate the rat’s position and direction of travel, why don’t they contain or represent its location and direction? If they did, wouldn’t that provide the very basis for reconciling the theory of mind with neuroscience after all? This line of reasoning is so natural that it serves in part to explain the temptation to accord content to the brain in just the way that makes the theory of mind hard to shake. By now, however, it’s easy to see why this reasoning is mistaken. For one thing, if the function of the place and grid cells really makes them representations of direction and location, then every organ, tissue, and structure of an organism with a function would have the same claim on representing facts about the world.
Consider the long neck of the giraffe, whose function is to reach the tasty leaves high up in the trees that shorter herbivores can’t reach, or the white coat of the polar bear whose function is to camouflage the bear from its keen-eyed seal prey in the arctic whiteness. Each has a function because both are the result of the same process of random or blind variation and natural selection that evolved the grid cells in the rat. Does the giraffe’s neck being long represent the fact that the leaves it lets the giraffe reach are particularly tasty? Is the coat of the polar bear about the whiteness of its arctic environment or about the keen eyesight of the seals on which the bear preys? Is there something about the way the giraffe’s neck is arranged that says, “There are tasty leaves high up in the trees that shorter herbivores can’t reach”? Is there something about the white coat of the polar bear that expresses the fact that it well camouflages the bear from its natural prey, seals? Of course not.
But even though they don’t represent anything, the long neck of the giraffe and the white coat of the polar bear are signs: the long neck is a sign that there are tasty leaves high in the trees on the savanna, and the white coat is a sign that the bear needs to camouflage itself from its prey in the whiteness of the arctic, the way clouds are signs that it may rain. But for the neck and coat to also be symbols, to represent, to have the sort of content the theory of mind requires, there’d have to be someone or something to interpret them as meaning tasty leaves or a snowy environment. Think back to why red octagon street signs are symbols of the need to stop—symbols we interpret as such—and not merely signs of that need.
The sign versus symbol distinction is tricky enough to have eluded most neuroscientists. The firing of a grid cell is a good sign of where the rat is. It allows the neuroscientist to make a map of the rat’s space, plot where it is and where it’s heading. John O’Keefe called this a “cognitive map,” following Edward Tolman (1948). The “map,” however, is the neuroscientist’s representation. The rat isn’t carrying a map around with it, to consult about where it is and where it’s heading. Almost all neuroscientists use the word “representation,” which in more general usage means “interpreted symbol,” in this careless way—to describe what is actually only a reliable sign. (See Moser et al., 2014 for a nice example.) The mistake is usually harmless since neuroscientists aren’t misled into searching for some other part of the brain that interprets the neural circuit firing and turns it into a representation. In fact, most neuroscientists have implicitly redefined “representation” to refer to any neural state that is systematically affected by changes in sensory input and results in environmentally appropriate output, in effect, cutting the term “representation” free from the theory of mind, roughly the way evolutionary biologists have redefined “design problem” to cut it free from the same theory.

I too think Rosenberg and Bakker are "right" even if they aren't 100% correct necessarily.  But, far be it from me to think I fully understand Rosenberg's point.  I think you'd find the book interesting though Sci.
I am a warrior of ages, Anasurimbor. . . ages. I have dipped my nimil in a thousand hearts. I have ridden both against and for the No-God in the great wars that authored this wilderness. I have scaled the ramparts of great Golgotterath, watched the hearts of High Kings break for fury. -Cet'ingira

sciborg2

  • *
  • Old Name
  • *****
  • Contrarian Wanker
  • Posts: 1173
  • "Trickster Makes This World"
    • View Profile
« Reply #88 on: December 09, 2019, 09:00:36 pm »
Alex Rosenberg and Bakker are correct (AFAICTell) that the only conclusion for those committed to Physicalism is this seeming Aboutness has to be false in some sense, but I just cannot see how that could be.

From Rosenberg's latest book:
Quote
What makes the neurons in the hippocampus and the medial entorhinal cortex of the rat into grid cells and place cells—cells for location and direction? Why do they have that function, given that structurally they are pretty much like many other neurons throughout both the rat and the human brain?
From as early in evolution as the emergence of single-cell creatures, there was selection for any mechanism that just happened to produce environmentally appropriate behavior, such as being in the right place at the right time. In single-cell creatures, there are “organelles” that “detect” gradients in various chemicals or environmental factors (sugars, salts, heat, cold, even magnetic fields). “Detection” here simply means that, as these gradients strengthen or weaken, the organelles change shape in ways that cause their respective cells to move toward or away from the chemicals or factors as the result of some quite simple chemical reactions. Cells with organelles that happened to drive them toward sugars or away from salts survived and reproduced, carrying along these adaptive organelles. The cells whose organelles didn’t respond this way didn’t survive. Random variations in the organelles of other cells that just happened to convey benefits or advantages or to meet those cells’ survival or reproductive needs were selected for.
The primitive organelles’ detection of sugars or salts consisted in nothing more than certain protein molecules inside them changing shape or direction of motion in a chemical response to the presence of salt or sugar molecules. If enough of these protein molecules did this, the shape of the whole cell, its direction, or both would change, too. If cells contained organelles with iron atoms in them, the motion of the organelles and the cells themselves would change as soon as the cells entered a magnetic field. If this behavior enhanced the survival of the cells, the organelles responsible for the behavior would be called “magnetic field detectors.” There’d be nothing particularly “detecting” about these organelles, however, or the cells they were part of. The organelles and cells would just change shape or direction in the presence of a magnetic field in accordance with the laws of physics and chemistry.
The iterative process of evolution that Darwin discovered led from those cells all the way to the ones we now identify as place and grid cells in the rat’s brain. The ancestors to these cells—the earliest place and grid cells in mammals—just happened to be wired to the rest of the rat’s ancestors’ neurology, in ways that just happened to produce increasingly adaptive responses to the rat’s ancestors’ location and direction. In other mammals, these same types of cells happened to be wired to the rest of the neurology in a different way, one that moved the evolution of the animal in a less-adaptive direction. Mammals wired up in less-adaptive ways lost out in the struggle for survival. Iteration (repetition) of this process produced descendants with neurons that cause behavior that is beautifully appropriate to the rat’s immediate environment. So beautifully appropriate, that causing the behavior is their function.
The function of a bit of anatomy is fixed by the particular adaptation that natural selection shaped it to deliver. The process is one in which purpose, goal, end, or aim has no role. The process is a purely “mechanical” one in which there are endlessly repeated rounds of random or blind variation followed by a passive process of environmental filtration (usually by means of competition to leave more offspring). The variation is blind to need, benefit, or advantage; it’s the result of a perpetual throwing of the dice in mixing genes during sex and mutation in the genetic code that shapes the bits of anatomy. The purely causal process that produces functions reveals how Darwin’s theory of natural selection banishes purpose even as it produces the appearance of purpose; the environmental appropriateness of traits with functions tempts us to confer purpose on them.
What makes a particular neuron a grid cell or a place cell? There’s nothing especially “place-like” or “grid-like” about these cells. They’re no different from cells elsewhere in the brain. The same goes for the neural circuits in which they are combined. What makes them grid cells and place cells are the inputs and outputs that natural selection linked them to. It is a process that over millions of years wired up generations of neurons in their location in ways that resulted in ever more appropriate responses for given sensory inputs from the rat’s location and direction.
Evolutionary biology identifies the function of the grid and place cells in the species Rattus rattus by tracing the ways in which environments shaped cells in the hippocampus and entorhinal cortex of mammalian nervous systems to respond appropriately (for the organism) to location and direction. Their having that function consists in their being shaped by a particular Darwinian evolutionary process.
But what were the “developmental” details of how these cells were wired up to do this job in each individual rat’s brain? After all, rats aren’t born with all their grid and place cells in place (Manns and Eichenbaum, 2006). So how do they get “tuned” up to carry continually updated environmentally appropriate information about exactly where the rat is and which way the rat needs to go for food or to avoid cats? Well, this is also a matter of variation and selection by operant conditioning in the rat brain, one in which there is no room for according these cells “purpose” (except as a figure of speech, like the words “design problem” and “selection” that are used as matters of convenience in biology even though there is no design and no active process of selection in operation at all).
Like everything else in the newborn rat’s anatomy, neurons are produced in a sequence and quantity determined by the somatic genes in the rat fetus. Once they multiply, the neurons in the hippocampus and the entorhinal cortex, and many other neurons in the rat’s brain as well, make and unmake synaptic connections with each other. Synaptic connections that lead to behavior rewarded by the environment, such as finding the mother’s teat, are repeated and thus strengthened physically (by the process Eric Kandel discovered; Kandel, 2000). Among the connections made, many are then unmade because they lead to behaviors that are not rewarded by feedback processes that strengthen the synaptic connections physically. Some are even “punished” by processes that interrupt them. In the infant rat, the place cells make contact with the grid cells by just such a process in the first three weeks of life, enabling the rat’s brain to respond so appropriately to its environment that these cells are now called “place” and “grid” cells (O’Keefe and Dostrovsky, 1979). Just as in the evolution of grid and place cells over millions of years, so also in their development in the brain of a rat pup, there is no room whatever for purpose. It’s all blind variation, random chance, and the passive filtering of natural selection.
These details about how the place cells and the grid cells got their functions are important here for two reasons. First, they reflect the way that natural selection drives any role for a theory of mind completely out of the domain of biology, completing what Newton started for the domain of physics and chemistry. They show how the appearance of design by some all-powerful intelligence is produced mindlessly by purely mechanical processes (Dennett, 1995). And they make manifest that the next stage in the research program that began with Newton is the banishment of the theory of mind from its last bastion—the domain of human psychology.
Second, these details help answer a natural question to which there is a tempting but deeply mistaken answer. If the grid cells and the place cells function to locate the rat’s position and direction of travel, why don’t they contain or represent its location and direction? If they did, wouldn’t that provide the very basis for reconciling the theory of mind with neuroscience after all? This line of reasoning is so natural that it serves in part to explain the temptation to accord content to the brain in just the way that makes the theory of mind hard to shake. By now, however, it’s easy to see why this reasoning is mistaken. For one thing, if the function of the place and grid cells really makes them representations of direction and location, then every organ, tissue, and structure of an organism with a function would have the same claim on representing facts about the world.
Consider the long neck of the giraffe, whose function is to reach the tasty leaves high up in the trees that shorter herbivores can’t reach, or the white coat of the polar bear whose function is to camouflage the bear from its keen-eyed seal prey in the arctic whiteness. Each has a function because both are the result of the same process of random or blind variation and natural selection that evolved the grid cells in the rat. Does the giraffe’s neck being long represent the fact that the leaves it lets the giraffe reach are particularly tasty? Is the coat of the polar bear about the whiteness of its arctic environment or about the keen eyesight of the seals on which the bear preys? Is there something about the way the giraffe’s neck is arranged that says, “There are tasty leaves high up in the trees that shorter herbivores can’t reach”? Is there something about the white coat of the polar bear that expresses the fact that it well camouflages the bear from its natural prey, seals? Of course not.
But even though they don’t represent anything, the long neck of the giraffe and the white coat of the polar bear are signs: the long neck is a sign that there are tasty leaves high in the trees on the savanna, and the white coat is a sign that the bear needs to camouflage itself from its prey in the whiteness of the arctic, the way clouds are signs that it may rain. But for the neck and coat to also be symbols, to represent, to have the sort of content the theory of mind requires, there’d have to be someone or something to interpret them as meaning tasty leaves or a snowy environment. Think back to why red octagon street signs are symbols of the need to stop—symbols we interpret as such—and not merely signs of that need.
The sign versus symbol distinction is tricky enough to have eluded most neuroscientists. The firing of a grid cell is a good sign of where the rat is. It allows the neuroscientist to make a map of the rat’s space, plot where it is and where it’s heading. John O’Keefe called this a “cognitive map,” following Edward Tolman (1948). The “map,” however, is the neuroscientist’s representation. The rat isn’t carrying a map around with it, to consult about where it is and where it’s heading. Almost all neuroscientists use the word “representation,” which in more general usage means “interpreted symbol,” in this careless way—to describe what is actually only a reliable sign. (See Moser et al., 2014 for a nice example.) The mistake is usually harmless since neuroscientists aren’t misled into searching for some other part of the brain that interprets the neural circuit firing and turns it into a representation. In fact, most neuroscientists have implicitly redefined “representation” to refer to any neural state that is systematically affected by changes in sensory input and results in environmentally appropriate output, in effect, cutting the term “representation” free from the theory of mind, roughly the way evolutionary biologists have redefined “design problem” to cut it free from the same theory.

I too think Rosenberg and Bakker are "right" even if they aren't 100% correct necessarily.  But, far be it from me to think I fully understand Rosenberg's point.  I think you'd find the book interesting though Sci.

Great quote - I disagree w/ Alex Rosenberg, but more than most he correctly identifies what the Physicalist position really commits one to: the acceptance that matter (and thus the brain, if it's matter) cannot be about anything.

Curious - why do you think the eliminativist position is correct? I could easily throw out free will if there were enough evidence, but the idea that we don't have thoughts is a step beyond my boggle threshold.
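
To make the "detection" passage in the quote above concrete, here's a minimal toy sketch - my own construction, not Rosenberg's, and every name in it is invented for illustration - of a cell that "detects" a sugar gradient purely mechanically: a run-and-tumble walker whose state gets pushed around by local concentrations, with nothing in it that is about sugar.

Code

import random

def sugar_concentration(x):
    # Invented environment for this sketch: more sugar the further right you go.
    return max(0.0, x)

def step(x, heading):
    # One "run-and-tumble" move: keep the heading if the concentration rose,
    # otherwise tumble to a random new heading. That is the entire "detector."
    new_x = x + 0.1 * heading
    if sugar_concentration(new_x) > sugar_concentration(x):
        return new_x, heading            # motion that "pays off" is simply repeated
    return new_x, random.choice([-1, 1]) # blind variation otherwise

x, heading = 0.0, random.choice([-1, 1])
for _ in range(1000):
    x, heading = step(x, heading)
print(f"final position: {x:.1f}")        # the walker typically ends up well up the gradient

An observer may call that a "sugar detector," but the code contains no representation of sugar, only state lawfully driven by input. And note it is exactly a state "systematically affected by changes in sensory input" that "results in environmentally appropriate output," which is all the redefined sense of "representation" at the end of the quote requires: a reliable sign, not a symbol.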

H

  • *
  • The Zero-Mod
  • Old Name
  • *****
  • The Honourable H
  • Posts: 2893
  • The Original No-God Apologist
    • View Profile
    • The Original No-God Apologist
« Reply #89 on: December 10, 2019, 03:05:12 pm »
Great quote - I disagree w/ Alex Rosenberg, but more than most he correctly identifies what the Physicalist position really commits one to: the acceptance that matter (and thus the brain, if it's matter) cannot be about anything.

Curious - why do you think the eliminativist position is correct? I could easily throw out free will if there were enough evidence, but the idea that we don't have thoughts is a step beyond my boggle threshold.
Well, I don't know.  I do think that "aboutness" is likely false.  Past that, I think that "conditioning" of some sort is likely most of what "mind" is/does.  Does this mean that anything and everything we attribute to mind is wholly incorrect?  I don't know, but I don't think it must be.

Maybe I could say my idea is that the phenomenon of "mind" is both not what we think it is (with aboutness and the like) and not just "bare" neuronal firing.  To me, maybe this is because consciousness (and more importantly self-consciousness) is both recursive and relational.

Recursive in the sense that being self-aware means you are in a sort of feedback loop, one that allows you to "see" yourself and (to some degree or other) modify or influence your own thoughts/behaviors.  Relational, because I don't think any "thought," whatever that might be or not be, "stands alone."  Just as the letters C-A-T have nothing, in themselves, to do with a four-legged animal, no thought, in itself, has any "stand-alone" thing like meaning outside its relations.  Maybe, in a sort of mereology, each part is a part, but an individual part does not inform us of the whole, being only a part of the whole.
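
A toy sketch of that relational point (my illustration, not anything from the thread's sources; the mappings are invented): the same token means different things under different interpretation tables, so whatever "meaning" it has lives in the relation an interpreter supplies, not in the token itself.

Code

token = "CAT"

# Three invented interpretation tables; none is privileged by the token itself.
english  = {"CAT": "a four-legged animal"}
medicine = {"CAT": "computed axial tomography"}
finance  = {"CAT": "ticker symbol for Caterpillar Inc."}

for name, table in [("english", english), ("medicine", medicine), ("finance", finance)]:
    print(name, "->", table[token])

# Nothing in the bytes 0x43 0x41 0x54 picks out any of these readings;
# stripped of a mapping, "CAT" is at best a sign, and a symbol only
# relative to an interpreter.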

So, to say that some neuronal "part" is the whole would be like saying that hydrogen atoms, one in a cat and one in a star, explain both cats and stars for us.  Or, let us pretend you own a house.  Then there is an earthquake and the house collapses.  We could be an "imaginative physicalist" (I guess) and say your house before and after are the exact same thing.  Still the same number of atoms, still the same atomic, molecular and chemical composition.  In other words, a summary physical survey says both things (the house before, the pile of rubble now) are the "same."  Except, of course, no one would say that, because it is easy to see that the two are not the same at all.  One had a definite, "meaningful" structure and relation, where the other is structured and related (in a manner of speaking) only by reference to what it was before it collapsed.

So, to say that "mind" is only neuronal activity sort of seems, to me, to be akin to saying the house and the pile of rubble are the same thing.  Except, of course, they aren't, because the structure and relation are keys to what makes a "whole" of it's constituent parts.

Of course, I am not smart or credentialed, so maybe that is a whole line of crap.
I am a warrior of ages, Anasurimbor. . . ages. I have dipped my nimil in a thousand hearts. I have ridden both against and for the No-God in the great wars that authored this wilderness. I have scaled the ramparts of great Golgotterath, watched the hearts of High Kings break for fury. -Cet'ingira