
Friday, April 25, 2008

The silliness of philosophy

In Peter Unger's Beyond Inanity, he makes a point of saying that the claims he's attacking are guilty of being insubstantial, not silly. However, it strikes me that an awful lot of what gets done in philosophy is silly. I've even suggested that philosophy is so silly that an obviously silly paper is actually an improvement over one that tries to hide its silliness.

What, exactly, is the problem? I can think of at least two things:

First, there's a tendency to want to be rigorous, sophisticated, and scientific, and this leads philosophers to invent technical concepts and apply them in situations where they serve no purpose. Take, for example, the concept of a possible world. A possible world is basically a possible situation, except that it's emphasized that it is a "maximal" situation, including or excluding every detail that the world might possibly have. Sometimes, it's a useful concept. Sometimes you want to imagine a hypothetical world exactly like ours, except for a short list of very specific changes. Or sometimes, you want to imagine a world with a very small number of objects and nothing else. That's all okay. (Or at least not as bad as what I want to complain about here.)

However, philosophers have fallen in love with the concept of possible worlds, and have begun invoking it in situations where it's useless or counterproductive. Situations where, most importantly, they aren't really concerned to have every detail fixed. For example, last semester in philosophy of mind we encountered a theory of the relationship between mind and matter (I think it was called "global supervenience") that was explained in terms of possible worlds. It turned out that an immediate consequence of the theory was that the position of a hydrogen atom in a distant galaxy might be vitally important in determining our mental states (thanks in part to possible worlds emphasizing the idea of every detail being fixed). And this is a theory which philosophers had seriously put forth. They never meant to say something so absurd, but did so because they had gotten into the habit of using technical concepts when they didn't need them.

Another example of this problem is adequately summed up in the second quotation in this post. Basically: philosophers wasting a lot of time on the nature of a certain claim, when all that matters is whether it's true.

The other thing that's silly about philosophy: philosophers taking themselves way too seriously. For example, in the metaphysics class I'm taking right now, we basically spent over a week discussing the transporters from Star Trek. This was done with minimal self-awareness and irony. Now, debates on Star Trek message boards about what this or that piece of technology would really be like can get quite heated (or so I've heard). However, Trekkies are at least capable of keeping the MST3K mantra stuck somewhere in the back of their minds, even if it's at the bottom of a chest in the attic of their brains, with the path to said chest blocked by a pile of lamps, coat hangers, and bicycles. The point is, it's there; they know it's just TV. In philosophy, however, there is no equivalent to the realization that it's just TV. It's serious business. And that makes all the discussions feel very off.

Tuesday, April 22, 2008

What matters: stuff

The trick is to know which books to read.
-Carl Sagan

When I reviewed Susan Jacoby's The Age of American Unreason, I got a somewhat unexpected criticism: not so much a complaint that Jacoby was off-target in her worries about the use of the word "folks," but that I had no business agreeing that some things are better than others. Rather than reply immediately in the comments, I decided I'd be better off replying in a series of blog posts.

This first post, I'll keep general. I want to point out that there are a couple things which can seem superficially an appropriate focus of life, yet pretty obviously aren't after a little reflection. A classic argument along these lines is Robert Nozick's "experience machine" thought experiment: would you take up an offer to plug in, for life, to a machine that would give you whatever good experiences you want? Most people say "no." An even worse answer than "good experiences" is "pleasure"--the idea of hooking up to electrodes to give a constant flow of intense pleasure isn't really all that far out, technologically speaking, but it's not something most people really want.

Once you accept that the truly good things in life are somewhat subtle, it becomes easy to see that it's worth putting effort into sorting out the truly good things from trash. Consider books, fiction or non-fiction (this is just an example; I'm hardly meaning to commit to anything on the value of books vs. TV or books vs. real world experience). In Cosmos, there's a great scene (in YouTube form at the bottom of this post) where Sagan walks down the length of a few bookshelves, containing approximately as many books as a person could read in a lifetime. It's a pretty good number, but it's only a fraction of the number contained in a good library. So assuming moderate variation in the quality of books, we've got good reason to be discerning. And this goes for TV shows and movies and websites and life experiences as much as books. Life's too short to watch "whatever's on."

Open Source Philosophy

The call for papers for the next Philosophers' Carnival reads as follows:
This is a call for papers for the next Philosopher’s Carnival to be hosted here from April 28 to May 12.

The theme for our carnival will be ‘Open Source Philosophy’. This may relate to the Philosophy of Open Source or to Open sourcing Philosophy (or anything in between).

Entries from other fields of philosophy will still be most appreciated of course.
I'm inclined to take the second option. Honestly, though, I'm not sure what it means. Not that that's going to stop me.

The way I conceive of open-source philosophy (based on a half-baked guess about what the phrase means), it is the effect of rapid communication and digital technology on the way philosophical ideas develop. As early as 1991, Daniel Dennett was commenting on how e-mail was making the canonical, published version of a paper less important, and creating a situation where most of the people concerned would find out about a paper by reading a draft. (Dennett then famously suggested this as an analogy for consciousness, but that's not relevant to my post.) Now, that kind of interaction can happen before a paper is even written, thanks to blog posts. And websites that let you upload Word documents and PDFs mean you don't even need e-mail to get a paper; you can just go to the relevant website.

I've actually done a fair amount of philosophical reading that way in the past couple of weeks. I read Richard Chappell's honors thesis, which I probably never would have gotten my hands on without the internet. I read the third chapter of Peter Unger's Beyond Inanity, and when I finish reading the last two available draft-chapters, I'll e-mail him with my thoughts. I also asked for the syllabus for a class on disagreement I didn't have the time to take, was warned some of the papers weren't published yet, and found them online anyway. That's the power of the internet for you.

All of this is very strange, in a way. There's nothing stopping me from citing any of the things I've read in what I write in the future; indeed, one of those things, Thomas Kelly's "Peer Disagreement and Higher Order Evidence," cites an awful lot of papers as "forthcoming." The ideas in Beyond Inanity are compelling enough, and projected publication distant enough, that I doubt I'll be able to resist the temptation to cite (indeed, I just printed off a one-page piece referencing BI, solely for the purpose of getting teacher feedback).

Another odd element is the idea of instant feedback, from anyone on the planet who wants to give it. The idea of a random undergraduate giving comments on a book draft to an established academic just isn't something that could have happened 30 years ago. I'm not sure what to make of it--I must face the possibility that the comments I send in will ultimately turn out to be drivel.

And aside from what you do with a draft after having read it, the experience of reading a draft is different from the experience of reading a finished product. The available BI drafts are things I would regard as frustratingly underdeveloped if I encountered them as finished products, but as drafts I can get excited about what they do contain and the finished product that may one day come of them.

Having rambled on like this, I'm not sure what all of this means. That admission reminds me of the tag line of the great new blog Journal of Half-Baked Ideas, and it's almost tempting to submit this there, except it doesn't quite seem to be the sort of fare they're printing. Therefore I will just end this post without a definite conclusion. Except that the world is changing.

Wednesday, April 16, 2008

People without vital force

In a previous post, I criticized the zombie argument for dualism on the grounds that a similar argument could be made for vitalism. In response, Richard directed me to a previous post which touched on the vitalism and zombies issue:
But 'life' can clearly be analysed in functional and structural terms. There is no sense to be given to the notion of something that is functionally and structurally indiscernible from a duck, having all the same kinds of relations to other objects as another duck does, and yet somehow fails to really be a living duck. To be a living duck just is to have the right kinds of functional relations and so forth. There's nothing more to it than that.
Here, Richard almost talks as if the problem with the vitalism analogy is that the argument for vitalism is wrong. But the key thing is why it's wrong. It's wrong because we have good reason to reject the pre-theoretical intuitions about the nature of life that some people have in a powerful form. Spend enough time around creationists, and soon you realize that many of them have an intuition that life is non-physical, so it would be impossible in principle for unguided matter to give rise to life. Sounds stupid? That's the point. We often mistake stupid ideas for profound philosophical insights.

Richard says life can be analyzed in functional terms (patterns of causal relations and such). I agree. But this isn't obvious, built into our pre-theoretical intuitions, or anything of the sort. If you want to analyze life functionally, you admit our intuitions about these things don't always give us the right answer. Conversely, some people think consciousness should be analyzed in functional terms. If they're correct, then consciousness is once again in the same boat as life.

Perhaps you dislike whatever theory happens to be the currently reigning functional analysis of consciousness. Perhaps you do so with reason. Still, we're in the beginning stages of understanding the brain. It's a false dilemma to say "either we have the right answer to this key issue already, or we have to accept our intuitions and not take seriously the possibility of ever getting another answer."

The burden of proof is on anyone who thinks there could be no physicalist account of consciousness. In my last post, I suggested there might be a good argument for that conclusion. But if there were, the dualist wouldn't need a zombie horde to do his dirty work. Invoking zombies is a weird way around this--it assumes we have reason to think consciousness isn't physical without ever providing the reason.

Richard has also recently written a post challenging the idea that thought experiments are question-begging. On this point, he brings in the analogy with the common sense belief that the Pope doesn't count as a bachelor. There's a disanalogy here, though: we're in a reasonably good position to answer that question based on our experience with how the term is used. In the zombie case, I don't know how we could know such things are possible. Perhaps it's mainly a question of what convinces people, but Richard's interactions with Eliezer Yudkowsky suggested he thought many people who aren't convinced should be.

The most frustrating post on the zombie argument, though, has to be "how to imagine zombies." There, Richard suggests that a world microphysically identical to ours would contain things like David Chalmers' book The Conscious Mind. This is a claim that should sound a note of caution in any good dualist's mind: if there is non-physical consciousness, it seems plausible to think that it affects the physical world, and most importantly is the reason philosophers like Chalmers write books like The Conscious Mind. The view that consciousness exists but has no such causal powers is known as epiphenomenalism, and is quite popular today. Chalmers endorses it. But Chalmers admits he isn't entirely confident about it. Now: if even a big-shot dualist like Chalmers isn't entirely confident about the truth of epiphenomenalism, what business do we have simply intuiting claims that presuppose its truth? Richard talks about what a super-genius would calculate, but the conclusions of super-geniuses seem an even poorer candidate for intuiting than most metaphysical issues (why bother with smart people if they can be replaced by intuitions?)* If nothing else shows the doubtful, question-begging nature of the zombie argument, this point should.

For more on this issue, I strongly recommend Siris' "Zombie Invasion" round-up. Especially the links there to the Brood Comb posts on epiphenomenalism. Oh, and be sure to check this out.

*As an aside: it matters a bit whether Richard means to say that the super-genius will know everything about a snapshot of a world, or about its causal processes and future as well. But it seems that "all there is to know" includes these things.

Tuesday, April 08, 2008

Zombie cage match

Two of my favorite bloggers, Richard Chappell and Eliezer Yudkowsky, recently duked it out over reductionism and zombies. (The zombie issue in a nutshell: supposedly, it's just obvious that we could have people physically identical to us but not conscious, so dualism is true.) Richard started out with Arguing with Eliezer, part I and Arguing with Eliezer, part II, then Zombie Rationality. Eliezer carried his part of the discussion in Hands vs. Fingers, Zombies! Zombies?, Zombie Responses, and The Generalized Anti-Zombie Principle. The discussion mostly took place at Eliezer's, and Richard complained his opponents there had nothing but "mere ridicule and sloganeering."

In spite of agreeing with Richard about reductionism, in the narrow disputes that got raised, I'm with Eliezer. For one, the idea of conceivability Richard appealed to is weird--it's a sense which entails possibility, but if you take that use of the term, we have no defense against the worry that we often think we're conceiving of something when we are not, in fact, doing so. Yes, I realize an awful lot of philosophers have used "conceivability" in Richard's way, but just because philosophers do it doesn't mean it's a good idea.

Richard's other line was to ask for a proof that zombies are impossible. But this is silly. A useful parallel case is that of "vital-force zombies," imaginary people physically identical to us only not alive. In the 19th century such an idea might have seemed possible, but the inability to provide some conceptual disproof didn't make vitalism right. Really, 'nuff said.

If you want to argue for dualism, appeals to esoteric possibilities don't do much good. The way that makes sense, I think, is to appeal to our direct acquaintance with consciousness, and point out that we have there something just not covered in our current theories of physics.

Thursday, March 27, 2008

Ethics and economics

I've seen a few posts on the internets recently on the links between ethics and economics. Barefoot Bum discusses the prisoner's dilemma, Chicago Dyke talks experimental economics, and Robin Hanson of Overcoming Bias (that's the place where the amazing Eliezer Yudkowsky blogs) argues morality is overrated, and that moral philosophers need to pay more attention to economics.

I find it all worth reading, as perhaps the most interesting moral problem I've come across is economic in nature, or at least game-theoryish. One of the main positions in modern/contemporary ethics is consequentialism, traditionally understood as the idea that whether an act is right or wrong depends on the consequences. However, other variants have been proposed, such as rule-consequentialism: an act is right if it accords with rules that, if generally followed, would have good consequences.

One criticism of rule-consequentialism that's been around for awhile is that the rule-following is senseless, that it would have us follow rules that don't make any sense. I recently got my thinking on that question kick-started by reading the anthology Morality, Rules, and Consequences (previous notes here). Much of what's in the essays I find puzzling. Shelly Kagan, for example, goes on at some length about "evaluative focal points," and ends up endorsing something that looks a lot like traditional "act" consequentialism, though he says people who've endorsed the view thinking themselves act consequentialists aren't really act consequentialists. Kagan has the goodness of actions determined by the consequences of the act itself, but also insists upon the existence of good rules, which have no clear relevance to our conduct.

The interesting defense of rule consequentialism, the more I think of it, is Riley's. As I described it in the notebook post: "Riley insists that some rules only produce good consequences as rules generally followed, and may not produce the best results in individual cases." His example is secretly killing for spare organs to save several times as many lives: it might produce the right consequences in occasional cases, but as a general practice (even a general practice in only the cases where it really has good consequences) it would undermine trust, having overall negative effects on society. If you're uncertain about this case, I suggest thinking about voting: one vote, it seems, makes no difference, so you can stay home, benefiting yourself and harming no one, in that sense promoting the greater good. But if nobody votes, that's a bad thing. What are you to do?

The relevance of economics and game theory is that this is the sort of situation that economists try to model all the time, and they do so with great sophistication. They typically assume selfish players, but some of the problems of interest arise even among agents dominated by altruism: both the killing-for-organs case and the voting case can be set up in such a way as to be entirely other-directed. It seems to me that if you really want to say something significant in these debates in ethical theory, you should try to make use of the very best economics and game theory we have. A toy version of the voting case is sketched below.
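To make the shape of the problem concrete, here is a minimal sketch of the voting case as an N-player game, in Python. This is my own toy model, not anything from the posts linked above, and all the numbers and the pass/fail threshold rule are illustrative assumptions:

```python
# Toy model (illustrative assumptions throughout): a measure passes iff
# enough citizens vote, voting has a personal cost, and every citizen is
# a pure altruist who cares only about total welfare.

N = 10_000           # citizens
COST = 1.0           # personal cost of casting a vote
BENEFIT = 50_000.0   # total social benefit if the measure passes
THRESHOLD = 5_000    # votes needed for the measure to pass

def total_welfare(turnout: int) -> float:
    """Social benefit (if the measure passes) minus everyone's voting costs."""
    return (BENEFIT if turnout >= THRESHOLD else 0.0) - COST * turnout

def should_vote(others_voting: int) -> bool:
    """An altruist votes iff their single vote raises total welfare,
    holding everyone else's behavior fixed."""
    return total_welfare(others_voting + 1) > total_welfare(others_voting)

print(should_vote(0))              # False: a lone vote just burns its cost
print(total_welfare(0))            # 0.0 -- so "nobody votes" is stable...
print(total_welfare(N))            # 40000.0 -- ...though full turnout is far better
print(should_vote(THRESHOLD - 1))  # True: a vote pays only when exactly pivotal
```

The killing-for-organs case has the same structure: each individual act can look welfare-positive in isolation, while the general practice is ruinous. This is exactly the kind of structure economists have well-developed tools for analyzing.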

Friday, March 14, 2008

Obama and transworld identity

This post in a nutshell: how a recent political flap teaches us to be cautious about a lot of things philosophers say.

The flap in question is Geraldine Ferraro's comment that "if Obama was a white man, he would not be in this position, and if he was a woman (of any color) he would not be in this position. He happens to be very lucky to be who he is. And the country is caught up in the concept." In case you care what I think about this remark, Mickey Kaus gives spot-on reasons to think it was at least not racist, though Ferraro could still be condemned for failing to exercise what philosophers would call sufficient modal skepticism. I'm not much interested in whatever claims about race the comment may imply, and will likely ignore any comments disputing my position here. What I care about is the implied philosophical claims.

You see, many contemporary philosophers believe that there is this thing called the "Problem of Transworld Identity"--the question of how two people in different possible worlds could be the same person, "possible world" here indicating a fancy variant on the idea of a possible situation which philosophers often invoke when they have no good reason to. Peter van Inwagen has mocked this idea, suggesting it is analogous to the problem of transpropositional identity: how can "Nixon is a villain" and "Nixon is an honest man" refer to the same person, when one is about a villain, and one is about an honest man? Nevertheless, even van Inwagen agrees that there are serious questions in the general vicinity of this problem, such as (and I think he uses this specific example) whether Socrates could have been an alligator.

Now the relevance of Obama and Ferraro: many metaphysicians would be inclined to deny that Obama could possibly have been white. For example, Saul Kripke has advocated a thesis known as the essentiality of origin, according to which you could not possibly have originated in a different way than you did. What this all means is not entirely clear, but it at least means you would have had to have had the same parents. However, Obama could not have been white unless he had a different father, a white one. Ergo, if Kripke is right, Ferraro made a serious metaphysical mistake aside from any racism that was or wasn't present.

In the discussion of the flap, I have found one brief suggestion that a philosophical mistake might have been involved. Here's Ezra Klein:
After all, Obama is not a woman, nor a white man. He's who he is. To say that if he were different, things would be different is to say nothing at all.
So far so good, but then Klein lapses back into taking Ferraro's assumptions at face value (while advocating the modal skepticism I referred to above):
As a white woman, maybe he would have led a military coup and established himself dictator. Who knows!? Hell, if he were a slightly less inspiring speaker, or had an off-night at the 2004 Democratic National Convention, he wouldn't be in this position either. Similarly, if Hillary Clinton were a black man, it's unlikely that she would have been a national political figure for the past 15 years, as it's unlikely that she would have married another man from Arkansas, and unlikely that the country would have put an interracial, same sex couple in the White House. But so what? This is an election, not Marvel's "What If?" series.
Now for another what if: what if Kripke were to contact Ferraro or Kaus or Klein and try to explain their metaphysical mistake? I suspect they would laugh and begin telling their friends about how silly philosophers can be. None of them ever meant to be making metaphysical claims; what they were debating was roughly "would a white man of Obama's age, talents, accomplishments, etc. be a viable Democratic presidential candidate?"

What's especially curious is that though the people mentioned would probably explain themselves in a predictable way when pressed, they also probably never thought about the distinction. I may well be the first person on Earth to notice the metaphysical implications of Ferraro's comment. All of this, I suggest, entails the following: we sometimes say things that look like deep metaphysical claims but aren't, without ever pausing to notice the difference. This suggests it is at least possible that some well-known work on transworld identity turns on less obvious but similar mistakes: philosophers notice something that looks like a profound metaphysical question, and never seriously consider the possibility that it isn't one. This seems to reinforce the Eliezer Yudkowsky quote I posted earlier this week:
Many philosophers - particularly amateur philosophers, and ancient philosophers - share a dangerous instinct: If you give them a question, they try to answer it.

Monday, March 10, 2008

What's a serious question?

Austin Cline has another post rebutting Amy Sullivan, and I agree with most of what he says. Amy Sullivan is on my mental list of noteworthy wankers. However, there's one thing I take issue with:
We have to make a distinction between the phrase "I have serious moral concerns about abortion" and "abortion is a subject which necessarily involves serious moral problems." If someone says the first, then I'll believe it is true for them.

The second, however, isn't a true statement. There can be cases where abortion poses serious moral questions, but not every single instance of abortion does. The Christian Right benefits from a blurring of the distinction between the two because if they can get anyone to agree that any cases of abortion involve moral problems, they can quickly move to saying that abortion is inherently problematic. After getting agreement on the premise that abortion involves serious moral questions they then move to conclude that women can't make those moral decisions themselves — and therefore they can't be permitted to legally choose to have an abortion.
Now Cline makes it quite clear that when he says "problems," he just means "questions." And I don't know how anyone could deny that abortion involves serious questions. Surely the question of when something becomes a person with rights is a serious question? Abortion cases plainly involve questions--just look at what people say--so what reason can be given for thinking the questions unserious? While you might have an easy base case in, say, the morning after pill, the development to a full baby appears gradual, so even that base case becomes entangled in some difficult questions. Finally, some serious philosophers have argued that abortion is always or almost always wrong. You may think you have the right answer to these questions, or that some opposing views are irrational, but how are the questions not even serious?

If abortion opponents have offered the argument Cline ascribes to them--and he provides no evidence they have--the problem is in moving from "there's a question" to "we have the right answer."

This is part of a larger problem I've noticed--people think that philosophical questions are highly restricted, so the statement "it's a philosophical question" can be casually used as an important premise in an argument (the ghostwriting for the recent Antony Flew book comes to mind). Philosophy, far from being narrow, is about as broad as analysis gets. Philosophical questions are everywhere. What we need is to stop the inference from "it's a serious philosophical question" to "I'm right."

Friday, March 07, 2008

On authorities

Simple question: why do people find the concept of a fallacious appeal to authority difficult to grasp? Not long ago... (Continue reading at God is for Suckers!)

Tuesday, March 04, 2008

Against happiness (or at least the tendency to profess it on surveys)

(Cross posted at God is for Suckers!)

I notice that as part of Internet Infidels' Great Debate, Jeffery Jordan tries to rehabilitate pragmatic arguments for belief by appealing to benefits in this life. The chief benefit listed is that studies supposedly show that religious people are happier. But how do they show this? If they're like most social-science studies, they simply ask people whether they're happy or not. If this is true, then what the studies show is not that religion makes people happy, but rather that religion gives people a propensity to tell survey-takers that they're happy.

The difference is obvious enough, but let me drive it home: it seems to be a fairly well-established finding that when asked by a social scientist, the average straight man will claim to have had six sexual partners, and the average straight woman will claim to have had one. Do the math: in a closed population, every new heterosexual partnership adds one partner to each side's tally, so the two averages would have to come out roughly equal. The reports can't both be true. People lie on social science surveys.

So maybe religions have no positive impact whatsoever on people's state of mind. Maybe religious people, because there is an expectation that they will be happy, are simply inclined to say things they don't really believe.

This, of course, is not the only alternative interpretation of these studies. Maybe people's ideas of happiness are skewed. How many people in the U. S., do you think, have done serious thinking about the nature of happiness? And of those people, how many would you characterize as being such sound philosophical thinkers that you would be willing to accept on faith that their ideas about happiness are right?

Again, to drive the point home: we know how to wire up animals' brains to deliver intense jolts of pleasure via electrodes. If given the ability to do so, they will self-administer these jolts to the exclusion of other activities. Would we characterize a person with such a setup as happy? Or, if memory serves, in his Philosophy for Dummies Tom Morris imagines a drug that allows someone to be in some sense contented as they divide their life between gang hit jobs and watching soap operas. Are they happy? Or, there's Robert Nozick's "experience machine," which is supposed to allow all kinds of great experiences without actually doing anything: would you plug in?

These issues aren't easy. Maybe the people in the experience machine aren't happy. Maybe they are, but the thought experiment shows happiness is not the be-all-end-all it's sometimes portrayed as. In any case, I won't trust a survey to answer the question.

Plausibly, people who feel some vague, long-lasting pleasurable feeling because of a false religious belief are like people plugged into Nozick's machine. As Carl Sagan said:
For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring.

Sunday, March 02, 2008

Academic Freedom: Who Needs It?

That was the title of a talk I attended last Thursday on campus. It was organized by Lester Hunt of our own philosophy department, and delivered by Richard T. De George of the University of Kansas. Though the talk ranged over a number of related issues, the title referred specifically to the question of who academic freedom is designed to benefit. De George argued it was designed to benefit not the professors, as one might think, but the general public. His central example was that of the USSR: professors learned to keep their mouths shut and avoid suffering personally, but they failed to produce new discoveries, greatly hurting the position of the USSR. Along the way I learned some useful tidbits of Soviet history: that Lenin had written a pamphlet titled "What is to be Done" arguing against intellectual freedom as early as 1901, and that Soviet pseudoscience extended well beyond the bogus genetics of Lysenko, to the denunciation of relativity and quantum mechanics as "bourgeois" (this changed when the Soviets decided they wanted to build an atom bomb).

De George described the arguments used by Lenin against intellectual freedom, and argued they had some plausibility at first glance. The problem, he said, was that the Bolsheviks didn't have the competence to evaluate what was good scientific theory. The government has no special access to truth. Adults are reasonable people, not needing government oversight of their beliefs.

He went on to describe an ideal of how the university should operate, conceding to an extent that his picture was idealized. Scholars were to be presumed competent as the ultimate authorities in their discipline, knowing better than outsiders what to teach. This independence, however, was not to be a barrier to accountability. He proposed that universities could be judged on whether their graduates actually had valuable skills.

It was not terribly surprising when De George got around to arguing that not only is there no excess of academic freedom, in reality there's too little of it. He discussed a couple of well known cases. First was Larry Summers of Harvard, who was forced to resign for suggesting that the lack of women in science might be explicable by a difference in innate ability or interest. De George said he had read Summers' remarks, and gave the impression that they were even milder than many news reports would suggest: Summers was merely suggesting a possible area of research.

The other individual case discussed was that of Ward Churchill, who made inflammatory remarks about the September 11th attacks. De George suggested the university had handled it on its own well enough. He also came out in opposition to the Academic Bills of Rights that David Horowitz has been promoting, saying that students have no right not to hear views with which they disagree. Finally, he addressed the claim that all knowledge is somehow subjective or politicized, making the obvious point that this view appears self-defeating.

Then came time for questions. One student asked about the appropriate response to a professor discussing the September 11th attacks in class (say, political science). The response was that the attacks make an excellent example for all kinds of issues, though if a professor spent fifty minutes haranguing his students, it might be grounds for a complaint to the department chair.

My question was about how realistic the assumption of adult reasonableness and scholarly competence was. I mentioned continental philosophy in literature departments. This prompted both De George and Hunt to discuss a case that they both knew of where someone in a literature department had almost been denied tenure, by people outside his department, because "this postmodernism thing has had its day." Once the rationale was publicly known, it was taken as obviously illegitimate and the guy got tenure. Hunt commented that he thought the key thing was the freedom of academic departments.

Other questions dealt with political correctness, diversity training, and how to teach critical thinking. The final couple of questions dealt with the question of what if someone outside a department knows that someone in the department is wrong. That part of the discussion seemed a little muddled and I tried to jump in, but there wasn't enough time.

In the end, I agreed with a lot of what De George said, though I'm skeptical of the idea of the department as sovereign, at least as a foundational principle. It suggests, for example, that if a physics department were split into separate departments of experimentalists and theoreticians, that split would inherently change who ought to have power. It may be that in practice we can't do much better than letting each department tend to its own affairs, but I can't see the principle as sacrosanct.

Wednesday, February 27, 2008

Consciousness and understanding

Last fall, I wrote about being disappointed with John Searle's Chinese room thought experiment. In that post, however, I left out something else about Searle's ideas that has been bugging me: he talks about understanding (say, of a story told in Chinese) but seems to assume that there's some deep relationship between understanding and consciousness. Searle's last resort in dealing with reductionist approaches to the mind (and this is something I'm entirely sympathetic to) is to claim that they are simply ignoring the issue of consciousness. Searle shifts casually between the two issues without ever distinguishing between them, but it's quite clear that they're different.

Consciousness, in the sense that has everyone excited, is subjective experience. Understanding, on the other hand, is I know not what. When I hear something in English, there's a certain feeling of understanding that isn't present with languages I don't know. I'm not sure that feeling is very deep, though. Certainly, I can call to mind an internal monologue providing a sort of commentary on the meaning, but such a monologue does not occur in the very moment I feel understanding, and indeed mentally thinking sentences to oneself takes some time. I wonder whether any single experience alone can indicate understanding. This might explain how people can think such ludicrously incoherent thoughts as "there is no such thing as truth." In such cases, perhaps, they have the sentence in their internal monologue, but they don't really understand it.

The sort of considerations I raised in the Chinese room case, especially the intuitions about variant cases, suggest to me this possibility: understanding has to do with the ability to integrate the thing with the rest of what's in your brain. The difference is not mainly in the initial experience, but in what your mind is able to do thereafter.

Wednesday, February 20, 2008

What is irrationality?

In the blog circles I run in, few doubt that irrationality is a major problem in the world. Forget religion: people's beliefs about how governments make life-and-death decisions are also a mess. Yet some people seem maddeningly oblivious to this: for example, over winter break I read an essay by philosopher Peter van Inwagen claiming it was just obvious that politics is full of rational disagreement, and even claiming a consensus for the idea that it isn't irrational to hold political beliefs on insufficient evidence.

Someday I will write a good-sized essay about the irrationality of politics. Now, though, I think it's just worth stepping back and trying to say what irrationality is. But I know enough philosophy not to try to give a final, once-and-for-all definition, one that would give the correct answer in all situations we might devise (if you haven't learned why this doesn't work yet, read this). No, instead I'm just going to throw out some rough ideas, some kind of groundwork to build on. It's not much, but it's worth getting these things clear.

First of all, irrationality, applied to beliefs, means something is wrong with your beliefs. But it's not that they're false. People in ancient Babylonia who believed that the Earth was flat were just as wrong about the facts as modern people who believe the same. However, modern flat-Earthers are irrational while the Babylonians weren't, or at the very least weren't irrational to the same extent. It is not entirely implausible to think that the Babylonians would have been irrational to believe the Earth round. A true belief can be irrational.

On the other hand, what's wrong with an irrational belief isn't totally disconnected from truth. It isn't about pragmatic rationality--what will make you feel good, or make people like you, or stop the evil AI program from torturing you (or the evil God, for that matter). I realize some will dispute the claim that it is always irrational to believe things on pragmatic grounds, but at the very least letting pragmatic reasons guide us too much risks leading us into irrational beliefs.

Irrationality is about getting at the truth. When we don't have direct access to the truth, we can at least adopt methods more likely to lead us to it. Failure to do so is irrationality, or at least a component of it.

The flat-Earth example suggests irrationality is related to being clearly wrong. Modern flat Earthers are clearly wrong, the ancient Babylonians were wrong, but not clearly so. This gets some interesting support from attempts to define what a psychiatric delusion is (from Wikipedia):
Although non-specific concepts of madness have been around for several thousand years, the psychiatrist and philosopher Karl Jaspers was the first to define the three main criteria for a belief to be considered delusional in his book General Psychopathology. These criteria are:

* certainty (held with absolute conviction)
* incorrigibility (not changeable by compelling counterargument or proof to the contrary)
* impossibility or falsity of content (implausible, bizarre or patently untrue)

These criteria still live on in modern psychiatric diagnosis. In the most recent Diagnostic and Statistical Manual of Mental Disorders, a delusion is defined as:

"A false belief based on incorrect inference about external reality that is firmly sustained despite what almost everybody else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary. The belief is not one ordinarily accepted by other members of the person's culture or subculture (e.g., it is not an article of religious faith)."
I have doubts about whether there is a clear difference between psychiatric delusion and normal irrationality, but even if I'm wrong about that, there's some relation. A common-sense way of looking at things is that delusions are a form of extreme irrationality. The common sense view would say that even if religious beliefs are irrational, they are not on the same level as a delusion. The DSM would suggest that the presence of the community makes them qualitatively different. Actually, I'm not sure I disagree with that: perhaps the difference between ordinary irrationality and much of what's classified as delusional is that ordinary people need to have their irrational beliefs socially reinforced.

In spite of these suggestions, a belief can be irrational even if it's not clearly false. A belief can be irrational simply because you really have no idea whether it's true. It would be irrational for me to take a passionate stand on whether the number of stars in the universe is even or odd.

If I had to take a stab at what it means to be irrational, here's what I'd do: I'd try to combine two strands of thought about knowledge in modern epistemology: internalism and externalism. Internalism says the rationality part of knowledge is entirely a matter of things the subject has access to. Externalism says it's a matter of things the subject doesn't have access to: something in the ballpark of "the reliability of your methods." I'd suggest a compromise, an inxternalism if you will, where it's a matter of the methods that you have access to. Someone who believes their eyes, having no way of knowing their eyes deceive them, isn't irrational. But someone who forgoes available methods of rational inquiry in favor of sophistry is irrational. That's why it's important to do your best to be rational, at least if you want to get at the truth.

Monday, February 18, 2008

Solved Philosophy

Richard has a post on Solved Philosophical issues. Based on looking at Richard's list, I propose one: just because a grad student thinks a philosophical issue is solved doesn't mean it's solved. For example:
1. Knowledge does not require certainty. But nor does justified true belief suffice.
The idea that justified true belief doesn't suffice has come under assault lately. For one, people in Hong Kong don't seem to have the relevant intuitions. Even working within more traditional considerations, I just read a paper suggesting maybe we should hold onto the JTB theory, in spite of contrary intuitions, because the theory explains so much ("What Good Are Counterexamples?" in 2003 Philosophical Studies).

Of course, we could consider a revised sense of settling a philosophical issue, where it's just like letting the dust settle: it could get kicked up again 30 seconds from now. This is a pretty good model for what's happened with the JTB theory: Gettier's attack sidelined it for decades, but in the last decade or so people have begun to be skeptical. Something similar has happened with non-reductive physicalism, I understand.

Another useful example:
8. It's not analytic (true by definition) that cats are animals. But it is metaphysically necessary: there is no possible world containing a cat that is not an animal.
This is based on some philosophy of language claims which I've complained a good deal about on this blog. This raises a new question: if 95% of philosophers take a view, is the issue settled? But if that's what settling means, how much should a young philosopher like myself care? Not very much, I'm inclined to think.

The most interesting category is the last: "more controversial" examples of settled philosophy: WTF? How is something controversial settled? This suggests an equivocation at the heart of the post: between the "well supported" and the things that are, in a more natural sense, settled (say, mostly uncontroversial).

Friday, February 15, 2008

What we say and what we mean

Last week Tuesday in class we were discussing reference and possible worlds. I asked what a possible world is--superficially, I felt comfortable with the concept, but I knew that there was debate. I knew about Lewis' crazy view on which every possible world is actual. I knew of the view that they are propositions providing a complete description of a world. The professor had other suggestions, such as that they were collections of properties, but at the end of the day, yes, we were discussing the relationship between sentences and propositions. At the end of the class, I began to have a new worry which I didn't quite feel like telling the teacher about: I wasn't sure what a proposition was!

For a philosophy major, this constituted a minor existential crisis. Debating the status of propositions is what philosophers do, so if you're a philosopher, you'd better know what a proposition is. I had gotten comfortable with the word some time ago. It wasn't like going into a science class and being confused by a bit of terminology that you were supposed to have learned the day you missed. Suddenly, though, I realized I didn't know what the word meant.

The Stanford Encyclopedia of Philosophy ended up coming to my rescue: among the proposed accounts of what a proposition is was "the meanings of sentences." Suddenly, the last philosophy class made sense: we had been discussing the relationship between what we say and what we mean. This simple fact had been obscured by a mess of jargon, and in retrospect I think some more substantial points were also obscured by it.

The great irony turns out to be this: arguably, my professor's use of that jargon was a failure by him (and the philosophical community he took his cues from) to say what he meant.

Monday, February 11, 2008

The time-traveling kidnappers

Another metaphysics post, this time getting away from Kripke and language:

One example introduced on the first day of the metaphysics class I'm taking was of a 17 year old slacker who goes off, spends three years in the military, and as a result changes significantly in character, is more disciplined, etc. When he gets back, he reports, "I'm a different person now." Philosophers, being philosophers, may be inclined to ask in this situation whether this might literally be true.

It sounds silly, but I've thought of another thought experiment that suggests something significant may be going on here. Imagine a person is kidnapped by time travelers; whether from another planet or a future Earth, it matters not. In addition to time travel, their technology includes great medical and genetic technologies, which allow the subject to be kept alive indefinitely. Over a course of ten thousand years, he goes on many adventures, entirely forgetting his original life for all practical purposes, and his genetic code is slowly altered, one tiny insignificant bit at a time, until it is unrecognizable. Maybe he has some vague ideas about what it was like to live in his home milieu, and maybe his genetic code could be recognized as originally human, but both memories and genetics have been altered (slowly, over the course of ten thousand years, remember!) to the point where he could no longer be identified with the person who was abducted.

To complete the thought experiment, imagine that somehow the time traveling technology finally places him back in his home setting, an hour or two after the abduction. Would it make any sense whatever for the subject's previous friends and family to treat him as in any way the same person? I think not. I dare say he would not be the same person. This is in spite of the fact that the example is different only in extent from the military case, and I have stipulated that the change is completely gradual. It is interesting to think that the ex-slacker will likely not interact with many of his old friends in quite the same way, especially if they themselves have remained slackers. The time traveler is only a more extreme form of that.

Linguistic intuitions and memory

Many students of philosophy, myself included, are convinced that at the end of the day philosophical arguments just have to appeal to intuition. They have to stop somewhere, and it's hard to formulate a totalistic theory of where they can stop, so let's just call our arbitrary stopping points intuitions. Maybe a given intuition will be undermined, but appeals to them are hard to rule out in advance.

The standard inclination is to think that intuitions must be a priori, but I think I have a counter-example to that: linguistic intuitions, intuitions about the meanings of words. Things like our intuitions about whether the sentence "Hesperus might not have been Phosphorus" is a legitimate, meaningful sentence. Though it's hard to further justify an intuition about this case, it plainly isn't purely a priori: the only way we can know what an English sentence means at all is by experience with English. Such intuitions appear to be a form of memory about how we've heard words used, just a very vague form. We are unlikely to be able to cite specific instances, much less quote them verbatim. Not terribly surprising, as research on the psychology of memory has made perfectly clear that memory isn't a video camera, that it's often quite vague. However, it's interesting to see how far the vagueness can be taken, to the point of giving us something that might at first glance be mistaken for an a priori intuition.

Incidentally, this may explain how Galileo was able to use a thought experiment to figure out that Aristotle's ideas about gravity were wrong (discussion here). It wasn't an a priori deduction, but rather a combination of realizing the logical consequences of Aristotle's ideas and having a vague idea that he (Galileo) hadn't seen things fall that way.

Kripke on speaking loosely

If you asked people immersed in contemporary Anglophone philosophy who the greatest living philosopher is, an awful lot of them would say Saul Kripke. Surprisingly, Kripke has published very little, concentrating on giving lectures. In one case, a series of lectures was deemed important enough to be transcribed and published, thus giving us one of Kripke's main publications, Naming and Necessity.

One of Kripke's central theses is that names, and many other terms, act as "rigid designators," referring to the same thing in all possible situations. It's hard to see the significance of this until you see some examples. One example is that Kripke claims that Hesperus and Phosphorus, two names given to Venus as it appears in the evening and morning sky respectively, before people knew the two were the same object, refer to Venus in all possible situations. There is no possible situation in which the words refer to a separate object (there are possible situations in which the words are used differently, but when we use the words in our situation, we refer only to Venus, not some other possible objects).

Now, my gut reaction to this is that "Hesperus might not have been Phosphorus" is a perfectly sensible, meaningful thing to say, and Kripke actually uttered sentences along those lines in Naming and Necessity. To his credit, Kripke notices this problem and at the end of the lectures imagines a hypothetical critic rattling off a string of such counter-examples. To this he responds (on p. 142 of my edition) that "when I speak of the possibility of the table turning out to be made of various things [another of Kripke's examples--CH], I am speaking loosely." If not for Kripke's ideas about speaking loosely, the counter-examples go through, and his project fails. Yet it strikes me at first glance as a hand-wave, and at second glance as an instance of the sort of misguided approach to language I've criticized several times before on this blog, such as in "can water taste funny." The idea--and its proponents would surely try to find something more nuanced--is that words cannot mean multiple things. Applied to Kripke, it is the idea that "Hesperus" cannot mean "Venus" sometimes and "the object that appears in the sky at such and such times and locations with such and such brightness" at other times, with frequent equivocation between the two senses. More precisely, the assumption is that the two senses cannot be equally legitimate. One has to be dismissed as mere loose speech, but the grounds for this aren't clear.

Brief Google Scholar searches yield nothing on this idea of speaking loosely. I wonder if anyone has ever commented on it, and if so, what has been said in its defense. It certainly needs more exposition than Kripke has given it.

Sunday, February 10, 2008

Red green blue

Enigmania has been posting on the sky's being blue, including a post titled The sky's blue, therefore it's an object. Oddly, that title may be an important insight in and of itself. "Colored things are objects" seems like a paradigm case of a necessary truth, knowable a priori, yet the sky seems not to fit many of our common-sense ideas of what objects are. It certainly doesn't fit the common-sense assumptions that used to be made of it, namely that it's a solid dome.

This catches my interest, partly because I've been reading Laurence BonJour's In Defense of Pure Reason, a modern defense of the idea that there are substantive a priori claims. That in and of itself seems reasonable enough, as I don't see any alternative to it except Humean skepticism, and I'm pretty clearly convinced that skepticism is self-defeating. However, one of BonJour's main examples bothers me: the idea that no object can be both red and green all over. Scientifically-minded person that I am, I immediately reflected on the fact that the property of the object is to emit light in a certain way, and what we think of as the redness or greenness of the object is a product of how our minds interact with the object. It is possible for the light-waves to be overlaid, though our minds wouldn't give us simultaneously red and green experiences in that case--I think it would be more of a yellowish orange (I'm trying to remember here what it's like to see green and red diodes placed very close together). Now, might we have an experience of something that seems to be simultaneously red and green? I honestly don't know. It's inconceivable insofar as I cannot imagine it, but that doesn't mean it's impossible.
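For what it's worth, here's a minimal sketch (my illustration, nothing from BonJour) of what overlaying red and green light gives in standard RGB terms, as on a screen or with closely spaced diodes:

```python
# Additive mixing of red and green light, in RGB coordinates.
red = (255, 0, 0)
green = (0, 255, 0)
mixed = tuple(min(255, r + g) for r, g in zip(red, green))
print(mixed)  # (255, 255, 0): yellow, a single perceived color
```

At full, equal intensities the mix comes out pure yellow; with more red than green it shades toward orange, which fits the yellowish-orange memory of closely spaced diodes. Either way, the percept is one color, not red and green at once.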

This isn't a huge blow to BonJour's specific ideas--he emphasizes that apparently a priori knowledge can be fallible. Still, it's interesting to think about.

Sunday, February 03, 2008

Abraham Lincoln, founding father

This continues my musings on metaphysics, though I suppose it's really more philosophy of language. Those trying to figure out what's going on here may want to look at the comments on the previous post in the series (see above link), and maybe even check out the SEP article on rigid designators. Rigid designators in a nutshell: many names refer to one thing in all possible worlds; "gold," for example, always refers to the element with atomic number 79, never to something else sharing gold's superficial properties.

It seems on the face of it that the person who believes "Abraham Lincoln was the third president of the United States" has a false belief about Abraham Lincoln. But consider the person who believes the following:
Abraham Lincoln was a Virginian statesman born in the 18th century. He drafted the Declaration of Independence in 1776, and went on to become a U.S. ambassador to France and the third president of the United States. He had a strong break with some of his fellow founders on a number of issues, including the French Revolution. Though he had doubts about the morality of slavery, he owned slaves all his life.
Now, does such a person have a false belief about Abraham Lincoln, or a false belief about Thomas Jefferson?

The general puzzle here is that we think we have both linguistic knowledge and knowledge about the way the world is, but we tend to learn the two things at the same time, and our teachers may not take pains to explain the difference. The situation is particularly bad with jargon-heavy fields of knowledge. This allows for even stranger examples, such as the student who thinks "hept" is the Greek root for "six," and therefore returns a wealth of information about hexane when asked about heptane.