Roommate said to someone on the phone that he “Got off when he wanted to,” and I made that like “Oh hO-oh” laugh like “that sounded dirty and I’m laughing about it” laugh but since he’s on the phone he couldn’t appreciate it so now I’m sharing it with you.

http://marelo.tumblr.com/post/92580610511/quoiquecesoit-replied-to

tynic:

marelo:

was your post inspired by that photoset of the film though? i’d never heard of it before but it looks good. i think?

I did indeed think about the Chinese Room because of that photoset!

Following on from your post:

One interesting thing to me about this thought experiment is that, while ‘Chinese’ in this sense is simply meant to stand in for any language you have no frame of reference for, one which has no recognizable antecedents or links with your own language - written standard Chinese may actually be one of the worst languages to pick as an example, as many characters still have a meaningful ideographic formulation. Aside from direct pictographic representation, the radical system can even sometimes allow meaning to be inferred from an unfamiliar textual element. But to come to this understanding, lacking a Rosetta, the naive user would have to make one (or many) quite massive intuitive leaps. They might have to somehow develop a recognition that, for example, the character 人 looks a little like a person - that the character 口 might in fact be a mouth. However, given enough time (and a sufficient lack of other stimulation) - is this so terribly implausible?
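(As a toy illustration of that kind of radical-based guessing, in Python - the radical-to-meaning table here is a tiny hand-picked sample, not a real lexicon, and plenty of characters would defeat it:)

    # Toy sketch: guess a rough semantic field for an unfamiliar character from
    # its radical. Both tables are tiny hand-picked samples, nothing like a real lexicon.
    RADICAL_DOMAINS = {
        "氵": "water",           # as in 河 (river), 海 (sea)
        "口": "mouth / speech",  # as in 叫 (to call), 唱 (to sing)
        "亻": "person",          # side form of 人, as in 你 (you), 他 (he)
    }
    CHARACTER_RADICALS = {"河": "氵", "唱": "口", "他": "亻"}  # demo data only

    def guess_domain(char: str) -> str:
        # The naive reader's "intuitive leap", mechanized: radical -> likely domain.
        radical = CHARACTER_RADICALS.get(char)
        return RADICAL_DOMAINS.get(radical, "no guess")

    for c in "河唱他":
        print(c, "->", guess_domain(c))  # water, mouth / speech, person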
In considering the implications of this, we do end up circling back around to the problem of strong AI. I am not going to pretend for a moment that you can actually understand written Chinese in any depth based purely on what the characters look like; even the most basic elements have in many cases drifted far from their pictographic origins, and puzzling out a compound character based solely on the meaning of its constituent parts has a very low chance of success. But if one could feasibly make such an initial leap - could intuit that first link between orthography and meaning - what would you be drawing on? A lifetime of experience in abstraction and pictorial representation, certainly. A feeling for what is likely to be important in human communication. A strong skill in pattern recognition. Etc. We could further break down the elements required for that tremendous mental jump, and perhaps even quantify them. And having quantified, to a minute degree, the essential components, can we transfer them to a machine? 
The question then becomes, to me, can we endow machines with intuition? This seems on the face of it nonsensical - our understanding of intuition is that it is by nature illogical, that it requires you to leap from proposition to conclusion with no intermediate steps. But we are also (ironically?) terrible at intuiting the processes of our own brain. Is intuition simply reasoning based on operations too fast and subconscious for us to break them down and bring to conscious analysis? Aka, can it be reverse engineered? The problem of intuition then simply becomes one of mechanical complexity, which is nominally (if not realistically) reducible and therefore solvable.
I’m not a philosopher and not much of a neuroscientist, and I don’t want to dive into the ghost in the shell problem. I started replying mostly because the Chinese room, to me, always raises more questions than it answers. The one I usually get hung up on is that the biggest difference between locking a machine in a room and having it regurgitate set answers to opaque statements, and doing the same with a human, is that eventually the human will get bored. It’s when we’re bored that we start looking for patterns, making up games, deliberately fucking around, or even trying to understand the impenetrable. Before trying to make a machine that understands language, if we can instead make a machine that gets bored, will we have constructed the basis for strong AI? Or is it a circular problem, where we first need to have the intelligence in order to get bored?

Reblogging for further thoughts!

i can’t remember where the chinese room fitted into the narrative of the philosophy of mind class i took in undergrad because i think i was confusedly remembering it as a criticism of (fodorian?) functionalism

Generally it’s intended as a rejection of the idea of “strong AI” which is contrasted with “weak AI” to mean “an actual artificial mind that is a person in the same way that we are” versus “something that convincingly fakes personhood but is nonetheless not a person.”  This does bump up against functionalism, because it rejects the idea of “software” being part of the human mind’s mechanics, but the primary target of the Chinese Room is strong AI.  (Functionalism has problems of its own, likewise due to poor analogy structure.)

The Chinese Room itself is pretty weak stuff, but, as Tynic points out, it’s held in oddly high esteem.  When you get right down to it, the argument rests on the assumption that our minds are something beyond material (hence my barb about metaphysical snowflakes), and that, therefore, no material computer can ever properly create or embody a mind.  This is utter bunk from a materialist perspective, since our brains are material computers and they embody minds.

Basically it goes like this:  If you can pass the Turing test without actually knowing the language in which you pass it (via extensive pre-determined instructions), that demonstrates that possessing an algorithmic program of that language is not the same as actually speaking it.  The analogy is then made between this situation and one of an algorithmic program that claims to be a mind, with the conclusion that even the most well-simulated mind isn’t actually a mind and doesn’t possess understanding or consciousness in the way that humans do.  That is, the simulated mind lacks a “Chinese speaker” even if it does all the things you expect a real mind to do.
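(For a sense of how little machinery the room actually involves, here’s a minimal sketch in Python of what the rulebook boils down to - a static table mapping incoming symbol strings to outgoing ones, with a canned fallback. The specific entries are invented placeholders, obviously, not anything from Searle’s paper:)

    # A deliberately dumb "Chinese Room": a static rulebook that maps input
    # symbol strings to output symbol strings. Whoever (or whatever) runs it
    # just matches shapes; nothing in here speaks the language.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
    }

    def room_reply(incoming: str) -> str:
        # Look the incoming squiggles up and hand back the matching squiggles.
        return RULEBOOK.get(incoming, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

    print(room_reply("你好吗？"))  # 我很好，谢谢。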

The biggest problem, I find, is that the program as described is not a well-simulated mind in the first place.  Minds have perspectives (where “perspective” means “concept of self and thus ability to self-direct”).  Static systems of instructions do not have this; our brains do, and any actually well-simulated mind would have it as well (no dualism required).  The Chinese Room thus doesn’t prove anything except that languages and language algorithms aren’t conscious.  It certainly doesn’t have anything useful to say about strong AI.

There’s also the issue I started all this with, which is that the Chinese Room couldn’t ever actually pass the Turing test over time, because Searle doesn’t understand linguistics, like, at all.  Plus any chatbot (which is what the Chinese Room boils down to) can be easily tricked once you know how it works.  Feed the Chinese Room the same sentence over and over and over again.  As described by Searle, you’ll get the exact same response, over and over and over again, which is not what an actual speaker would do.  Even redundancies like “if second time, do this instead” would eventually run out.
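(Here’s that trick played out on the same kind of toy setup - ask the identical question five times and watch the “if asked again” patches run dry. Again, the canned strings are invented placeholders:)

    # Feed the room the same sentence over and over. A static rulebook gives the
    # identical reply every time, and even a short stack of "if asked again, say
    # this instead" patches is exhausted after a couple of turns.
    FIRST_REPLY = "我很好，谢谢。"                    # "I'm fine, thanks."
    REPEAT_PATCHES = ["你已经问过了。", "还是很好。"]   # "You already asked." / "Still fine."

    def room_reply(incoming: str, times_seen: int) -> str:
        if times_seen == 0:
            return FIRST_REPLY
        if times_seen - 1 < len(REPEAT_PATCHES):
            return REPEAT_PATCHES[times_seen - 1]
        return FIRST_REPLY  # patches exhausted: back to the same canned answer

    for i in range(5):
        print(room_reply("你好吗？", i))
    # The last two turns come out word-for-word identical (and identical to the
    # first), which is not what an actual speaker would do.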

It was an analogy I always found immensely frustrating. It’s a poor one; I don’t know why it gained such a foothold.

To take a phrase from Searle himself, its adherents were probably “in the grip of an ideology.”  Which is to say, dualism and its insistence that we are special metaphysical snowflakes.