Roommate said to someone on the phone that he “Got off when he wanted to,” and I made that like “Oh hO-oh” laugh like “that sounded dirty and I’m laughing about it” laugh but since he’s on the phone he couldn’t appreciate it so now I’m sharing it with you.
One interesting thing to me about this thought experiment is that, while ‘Chinese’ in this sense is simply meant to stand in for any language you have no frame of reference for, one which has no recognizable antecedents or links with your own language - written standard Chinese may actually be one of the worst languages to pick as an example, as many characters still have a meaningful ideographic formulation. Aside from direct pictographic representation, the radical system can even sometimes allow meaning to be inferred from an unfamiliar textual element. But to come to this understanding, lacking a Rosetta, the naive user would have to make one (or many) quite massive intuitive leaps. They might have to somehow develop a recognition that, for example, the character 人 looks a little like a person - that the character 口 might in fact be a mouth. However, given enough time (and a sufficient lack of other stimulation) - is this so terribly implausible?
In considering the implications of this, we do end up circling back around to the problem of strong AI. I am not going to pretend for a moment that you can actually understand written Chinese in any depth based purely on what the characters look like; even the most basic elements have in many cases drifted far from their pictographic origins, and puzzling out a compound character based solely on the meaning of its constituent parts has a very low chance of success. But if one could feasibly make such an initial leap - could intuit that first link between orthography and meaning - what would you be drawing on? A lifetime of experience in abstraction and pictorial representation, certainly. A feeling for what is likely to be important in human communication. A strong skill in pattern recognition. Etc. We could further break down the elements required for that tremendous mental jump, and perhaps even quantify them. And having quantified, to a minute degree, the essential components, can we transfer them to a machine?
The question then becomes, to me, can we endow machines with intuition? This seems on the face of it nonsensical - our understanding of intuition is that it is by nature illogical, that it requires you to leap from proposition to conclusion with no intermediate steps. But we are also (ironically?) terrible at intuiting the processes of our own brain. Is intuition simply reasoning based on operations too fast and subconscious for us to break them down and bring to conscious analysis? Aka, can it be reverse engineered? The problem of intuition then simply becomes one of mechanical complexity, which is nominally (if not realistically) reducible and therefore solvable.
I’m not a philosopher and not much of a neuroscientist, and I don’t want to dive into the ghost in the shell problem. I started replying mostly because the Chinese room to me always raises more questions than it answers. The one I usually get hung up on is that the biggest difference between locking a machine in a room and having it regurgitate set answers to opaque statements, and doing the same with a human, is eventually the human will get bored. It’s when we’re bored that we start looking for patterns, making up games, deliberately fucking around, or even trying to understand the impenetrable. Before trying to make a machine that understands language, if we can instead make a machine that gets bored, will we have constructed the basis for strong AI? Or is it a circular problem, and we first need to have the intelligence in order to get bored?
i can’t remember where the chinese room fitted into the narrative of the philosophy of mind class i took in undergrad because i think i was confusedly remembering it as a criticism of (fodorian?) functionalism
Generally it’s intended as a rejection of the idea of “strong AI” which is contrasted with “weak AI” to mean “an actual artificial mind that is a person in the same way that we are” versus “something that convincingly fakes personhood but is nonetheless not a person.” This does bump up against functionalism, because it rejects the idea of “software” being part of the human mind’s mechanics, but the primary target of the Chinese Room is strong AI. (Functionalism has problems of its own, likewise due to poor analogy structure.)
The Chinese Room itself is pretty weak stuff, but, as Tynic points out, it’s held in oddly high esteem. When you get right down to it, the argument rests on the assumption that our minds are something beyond material (hence my barb about metaphysical snowflakes), and that, therefore, no material computer can ever properly create or embody a mind. This is utter bunk from a materialist perspective, since our brains are material computers and they embody minds.
Basically it goes like this: If you can pass the Turing test without actually knowing the language in which you pass it (via extensive pre-determined instructions), that demonstrates that possessing an algorithmic program of that language is not the same as actually speaking it. The analogy is then made between this situation and one of an algorithmic program that claims to be a mind, with the conclusion that even the most well-simulated mind isn’t actually a mind and doesn’t understand consciousness in the way that humans do. That is, the simulated mind lacks a “Chinese speaker” even if it does all the things you expect a real mind to do.
The biggest problem, I find, is that the program as described is not a well-simulated mind in the first place. Minds have perspectives (where “perspective” means “concept of self and thus ability to self-direct”). Static systems of instructions do not have this; our brains do, and any actually well-simulated mind would have it as well (no dualism required). The Chinese Room thus doesn’t prove anything except that languages and language algorithms aren’t conscious. It certainly doesn’t have anything useful to say about strong AI.
There’s also the issue I started all this with, which is that the Chinese Room couldn’t ever actually pass the Turing test over time, because Searle doesn’t understand linguistics, like, at all. Plus any chatbot (which is what the Chinese Room boils down to) can be easily tricked once you know how it works. Feed the Chinese Room the same sentence over and over and over again. As described by Searle, you’ll get the exact same response, over and over and over again, which is not what an actual speaker would do. Even redundancies like “if second time, do this instead” would eventually run out.
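To make the repetition trick concrete, here's a minimal sketch of a Searle-style rule book modeled as a static lookup table (the entries are invented purely for illustration):

```python
# A Searle-style rule book: a fixed mapping from input symbols to
# output symbols, with no state and no understanding behind it.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",  # invented entry, for illustration only
}

def chinese_room(sentence):
    # Look up the scripted reply; fall back to a stock response.
    return RULE_BOOK.get(sentence, "请再说一遍。")

# Feed it the same sentence over and over: the reply never varies,
# which is exactly what gives a static system away.
replies = [chinese_room("你好吗？") for _ in range(5)]
```

A real speaker would get annoyed, vary their phrasing, or ask why you keep repeating yourself; the lookup table can't, no matter how many "if second time" clauses you bolt on.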
Real languages and their speakers change. There is no way to update the text of the instructions without someone who actually speaks the language being involved. And if you somehow get a system that updates itself, well, guess what you just made? A person, who speaks the language.
hey there! did i hear right that it was your birthday? i hope you had a nice one!
I did! It was really good. I had many friends present and a real excellent datefriend and I declared myself the Monarch of Hugs and we went to a museum and designed a combination dildo/blender (top secret don’t tell anyone) and we all held hands and summoned a demon and datefriend destroyed our arms at arm wrestling and one of my friends sent me her master’s thesis which is about how metaphors work so
FACEBOOK: Hi, I’m Facebook. ME: Nice to meet you, I’m Ryan. FACEBOOK: What’s your last name? Where do you live? When were you born? What’s your phone number? Is that work or mobile? Can I have your work number too? ME: Facebook, I just met you. FACEBOOK: This is what friendship is to me.
ME: Hey, you know what’d be lots of fun? If we had a picnic! FACEBOOK: Hey, you know what’d be lots of fun? If you told me the names of every single person you know!
FACEBOOK: Hey Ryan, do you know this person? ME: That’s Sarah. I haven’t spoken to her for years. FACEBOOK: Okay, here’s a shot of her bedroom and some pictures of her children as they sleep.
FACEBOOK: Hey Ryan, do you know this person? ME: I… maybe? I may have seen him at a party. FACEBOOK: He likes The Big Bang Theory. You wanna be friends, right? ME: No. FACEBOOK: I’ll ask you to be friends with him every time I see you again for the next six months.
FACEBOOK: Your friends went to the beach. Do you have any comments on these pictures of your friends at the beach? ME: Huh? FACEBOOK: I’m showing their swimsuit pictures to everyone. Do you like them? You can tell me if you like them. It’s fine if you like them. ME: They’re… okay, I guess? FACEBOOK: Okay, I just told them and everyone they know that you like their swimsuit pictures.
MY FRIEND STEVE: Hey, Facebook just said we’re not friends anymore? What the hell, Ryan? ME: Huh? FACEBOOK: Hah hah hah
NSA: Hey Facebook, what can you tell us about Ryan? FACEBOOK: Age, interests, relationships, activities, where he was last night, what he looked like while he was there, the last five places he’s lived - what do you want? NSA: That’ll be great, thanks. Do we need a warrant? FACEBOOK: Nah, just make a fake account and friend someone who is friends with Ryan. That’s good enough for me! NSA: Hah hah hah
FACEBOOK: Hey, did you know your aunt is racist? ME: I… no? FACEBOOK: Here’s something they wrote about “the foreigners”. ME: Why would you think I’d want to see this? FACEBOOK: Do you like what you see? You can tell me if you like it. It’s fine if you like it.
FACEBOOK: Hey, this corporation wants to engage with you. ME: What? No. FACEBOOK: They paid me money so you’re going to listen to them whether you want to or not. CORPORATION: Hi, are you getting married? Do you want to buy diamonds? You mentioned diamonds earlier so you should buy our diamonds. ME: I was talking about the James Bond movie, Diamonds Are Forever. CORPORATION: We can sell you that too. ME: Wait, how did you know I was talking about that in the first place? FACEBOOK: Hah hah hah
ME: Facebook, I don’t want to be friends anymore. Forget everything I ever told you about myself. FACEBOOK: Okay. ME: Facebook, did you delete everything? FACEBOOK: I did. Sorry to see you go. ME: … ME: …Facebook, if I said I wanted to be friends again, what would you say? FACEBOOK: Here’s all your old shit again! I never deleted anything! FACEBOOK: Hah hah hah
So this person has been kinda wandering campus the past few days, with a friend accompanying (different friend each time) and approaching people to ask if they want to talk about the Bible (and presumably how Good they think it is, and its Words).
First time they approached me, I was pushing a really heavy cart across campus and they wanted to know if they could even just walk alongside me and talk to me about it and I was like??? No. I told them I was an atheist and not really interested in talking about it with some random person on the street, both of which are true, especially so when I’m pushing a cart around in 95 degree weather. On the plus side, that time, their friend had a dog.
Second time, I was walking back to my office with my breakfast and they were like, “Hi, do you have a minute?”
I said, “No, I really don’t. And you asked me this yesterday.”
“You remember me!”
Yes. And they clearly remembered me. So why approach me again??? I was pretty clear the first time that I wasn’t interested.
Anyway something something religion is the worst evil of all human endeavor something something. Quit trying to convert me, you dorks. Ain’t happening.
why celsius/centigrade is better than fahrenehenheit
easier to spell
all water below 0 is ice. easy and logical
all water above 100 is steam. easy and logical
if it’s 1 degree outside one day and 10 degrees the next you can literally say it’s 10x warmer and you aren’t even exaggerating
why farhenininheniehenhet is better than centigrate/celsius
i love celsius & despise fahrenheit and despite living for 5 years in a place where people talk all the time about the low 70s and the high 30s like that meant something, it still doesn’t mean anything
i am afraid i have to call bullshit on the “10 times warmer” thing being real because arbitrary zero point
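The zero-point objection can be made concrete by converting to Kelvin, the scale that actually starts at absolute zero, so ratios of temperatures mean something physically:

```python
def celsius_to_kelvin(c):
    # Kelvin starts at absolute zero, so ratios of Kelvin temperatures
    # are physically meaningful; ratios of Celsius readings are not.
    return c + 273.15

# Is 10°C really "10x warmer" than 1°C?
ratio = celsius_to_kelvin(10) / celsius_to_kelvin(1)
# 283.15 K / 274.15 K ≈ 1.033 — about 3% warmer in absolute terms, not 10x
```

The 0 in Celsius is just where water happens to freeze, not where "no warmth" is, so dividing one Celsius reading by another tells you nothing.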
As a university tutor in my hometown, a city which is roughly 40% black and 37% white, I still had students asking me, “Do they just never learn how to talk right?” I pull up a chair when this happens: “Listen up, gang.” So what do I tell them? Well, the goal is to convey that, scientifically speaking, non-standard varieties of English such as the English spoken by Rachel Jeantel and the ‘proper English’ they’ve been taught are equally communicative. I go over the differences and point out that both have a rule system that must be followed to speak convincingly.
But then, I don’t see why there should need to be that justification. So I end up trying to teach respect. If they have a student that speaks a non-standard variety of English, they need to understand that that student is therefore competent in understanding at least two versions of English: the version they speak at home and other safe environments, and the one forced upon them when listening to you.
The alarmingly pervasive idea that standard English equates to ‘good grammar’ and non-standard English equates to ‘bad grammar’ is false and exclusionary. When it’s used to judge the intelligence and credibility of a young black woman, it’s reminiscent of the faulty scientific racism of “The Bell Curve.” But language shaming remains acceptable behavior in the status quo. It is one of the last bastions of unabashed racism and classism.