5 Comments:
Is it consciousness that is needed for language use, or the ability to respond to language use in a dynamic fashion that requires some form of intentionality? This intentionality might be consciousness; I don't know, as this is your story, not mine. But thinking about the sceptical solution to language use as a form of chess game in which new moves are continually created, and it is this creation that constitutes language use, does imply that we are unlikely to create an AI that participates in language without giving it some form of self-motivated and dynamic intention, which, incidentally, is probably also what is required for it to be ethical.
Answering your first question depends on what you mean by 'language use'. I'll try to post something that clarifies some of my concerns. Having said that, I think our suspicions are running along a similar track. I'm not sure that intentionality is an essential part of consciousness, but it does seem to be deeply involved in language use (once again, depending on what you mean by 'use', etc.). Where I am finding difficulty is in explaining different things 'using' language. Can something that is like me but has no qualia, i.e. a philosophical zombie, have any meaning attached to its words?
Yeah, a clarification would be much appreciated.
Would such a philosophical zombie be, say, a cat, which probably does not 'mean' anything by the utterance 'mew' but is still motivated when making it? My cat has developed some clever ways of communicating what she wants to me, e.g. a mew followed by sitting patiently by her bowl may indicate she wants food, but I doubt anything is 'meant' by any of the components, nor would they be sufficient to be considered a language. Information has been conveyed (the cat's desire for food), but very little communicated. This probably makes my first comment redundant, as there is clearly intention in the cat's 'communication' but no actual meaning. However, it also possibly calls into question Martin's hypothesis that 'agreement' is what produces meaning, as there is agreement between the cat and me in this communication, but nothing that is actually 'meant'.
So I can appreciate where you are heading with the interaction-of-minds idea: something needs to have a belief to be communicated, and another needs to receive that belief, identify it separately from the act of communication, but at the same time link both the act and the belief, in order to say that anything has been meant and communicated. The problem is that the belief and the act will often conflict or bear little relation to each other, as they are almost certainly autonomous of one another.
Rowan, I think you are on the right track there. Cats are a great aid to any significant thinker, eh? Seriously, I have been moving towards a similar conclusion: there is a need to somehow link the beliefs that are being communicated from one mind to another to the act, while still keeping them separate, as there seem to be a number of things going on at once here.