Wednesday, April 29, 2015

Brains and animalism

Animalists hold that we are animals. It is widely accepted by animalists that if a brain were removed from a body, and the body kept alive, the person would stay with the bulk of the body rather than go with the brain.

I wonder how much of the intuition is based on irrelevant questions of physical bulk. Imagine aliens who are giant brains with tiny support organs—lungs, heart, legs, etc.—dwarfed by the brain. I think we might have the intuition that if the brain were disconnected from the support organs, the animal would go with the brain. In the case of beings that dwarf their brains, it feels natural to talk of a certain operation as a brain transplant. But in the case of beings that are almost all brain, the analogous operation would probably be referred to as a support-system transplant. Yet surely we should say exactly the same thing metaphysically about us and the aliens, assuming that the functional roles of the brains and the other organs are sufficiently similar.

This isn't a positive argument that we'd go with our brains. It's just an argument to defuse the intuition that we wouldn't.

What about cerebra? Here's a widely shared intuition. If the cerebrum is removed from the skull of an animal and placed in a life-support vat, the animal stays with the rest of the body.

But now suppose that we granted that the animal goes with the whole brain. Let's say, then, that I am an animal and sadly become a brain in a life-support vat, losing the rest of my body. Suppose that next my brain is cut and the upper and lower brains are placed in separate life-support vats. It does not seem particularly plausible to think that the animal goes with the lower brain. (Maybe the animal dies, or maybe it goes with the upper brain.) So once we've granted that the animal would go with the brain, the primacy of the lower brain for animal identity seems somewhat undermined.

Maybe, though, one could accept both (a) the common intuition that if the cerebrum were removed the human animal would go with the rest of its body, and (b) my intuition that if the human animal were first reduced to a brain, and the brain then cut into the cerebrum and lower brain, the animal would go with the cerebrum. There is no logical contradiction between these two intuitions. Compare this. I have a loaf of bread. Imagine the loaf marked off into five equally sized segments A, B, C, D and E. If I first cut off the 2/5 of the loaf marked D and E, it's plausible that the loaf shrinks to the ABC part, and DE is a new thing. And then if I cut off C, the same loaf shrinks once again, to AB. On the other hand if I start off by cutting off the AB chunk, the loaf shrinks to CDE. So the order of cutting determines whether the original loaf ends up being identical to AB or to something else. (We can also make a similar example using some plant or fungus if we prefer a living example.) Likewise, the order of cutting could determine whether the animal ends up being just a cerebrum (first remove brain, then cut brain into upper and lower parts) or whether it ends up being a cerebrumless body.
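The order-dependence in the loaf case can be put in a toy model (my illustration, not part of the original argument): assume that at each cut the original object survives, shrunk to the remnant, while the chunk cut away is a brand-new object.

```python
def cut(loaf, chunk):
    # Toy assumption: the original object survives each cut as the remnant;
    # the chunk cut away is a new object.
    return ''.join(part for part in loaf if part not in chunk)

loaf = "ABCDE"
order1 = cut(cut(loaf, "DE"), "C")  # first cut off DE, then cut off C
order2 = cut(loaf, "AB")            # first cut off AB
print(order1, order2)  # AB CDE
```

The same original loaf ends up as AB on one cutting order and as CDE on the other, which is all the bread analogy needs.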

We might have a rough general principle: The animal when cut in two tends to go with the functionally more important part. Thus, perhaps, when the human animal is cut into a brain and a rest-of-body, it goes with the brain, as the brain is functionally more important in the brainier animals. When that brain is subsequently cut into upper and lower brains, the brainy animal goes with the upper brain, as that's functionally more important given its distinctively brainy methods for survival. On the other hand, if the human animal is cut into a cerebrum and a cerebrumless-rest-of-body, perhaps (I am actually far from sure about this) the animal goes with the cerebrumless-rest-of-body, because although the upper brain is more important functionally than a lower brain, the lower brain plus the rest of the body are collectively more important than the upper brain by itself. So the order of surgery matters to identity.

Self-ownership and organ sale

  1. Things owned can be permissibly traded, barring special circumstances.
  2. Trade in persons is never permissible.
  3. Thus, no one owns a person. (By 1 and 2)
  4. Thus, no person owns herself. (By 3)

(By the same argument, God doesn't own us, either. We belong to God, of course, but not by way of ownership.)

Let's continue thinking about self-ownership:

  5. If x is not simple and I own every proper part of x, I own x.
  6. I don't own myself. (By 4, as I am a person)
  7. I am not simple.
  8. So, there is a proper part of me that I don't own. (By 5-7)
  9. All my proper parts are on par with respect to my ownership of them.
  10. So, I don't own any of my proper parts. (By 8-9)
While I think the conclusion of this argument is true, I am less convinced by it than by the earlier argument. I think 9 is not completely convincing given dualism: spiritual parts perhaps aren't on par with physical. I am far from sure about 7. And I could see ways of questioning 5. Still, it's an argument worth thinking about.

Suppose the argument is correct. Then we have a further interesting argument:

  11. My organs are proper parts of me.
  12. It's wrong or impossible for me to sell what I don't own.
  13. So it's wrong or impossible for me to sell my organs. (By 10-12)
While I am sympathetic to the conclusion, I worry that this argument may equivocate on "organs". Aristotle says that a severed finger is a finger in name alone. Perhaps 11 is true of a kidney as it is found in me, but once the kidney is removed from me, the kidney perishes and a new kidney-like object—a kidney only in name—comes into existence. The kidney-like object is not a part of me, and it is this kidney-like object that is being sold, not the kidney that was a part of me. Still, this isn't clear: maybe the kidney that was a part of me is what is sold, since it is for the loss of it that I am being compensated if "I sell my kidney."

More worryingly, if the above argument were sound, it seems it would be sound with "organs" replaced by "hair". But it doesn't seem wrong or impossible for me to sell my hair. Perhaps, though, we should modify 9 to read:

9*. If I own any one of my living proper parts, I own all my living proper parts and a fortiori all my non-living proper parts.
Then the conclusion is weaker than 10:
10*. I don't own any of my living parts.
This could allow me to sell my hair and some gold atoms in my body, but not my kidney.


Free will is incompatible with (causal) determinism, and I know it. I know it because I have sound arguments for it, with compelling premises. It is good for people to know the truth about things that matter, and this is one of them. So I should be glad, for your sake and not just out of vanity, if I convinced you by one of these compelling arguments. And I would be glad.

But perhaps I shouldn't be glad if I convinced everyone, and that's for two reasons. First, there actually being compatibilists helps keep incompatibilist investigators honest and leads to a deeper understanding of the ways in which determinism precludes free will. Second, while I know that freedom is incompatible with determinism, I might be wrong. That chance is sufficiently small that it's safe for me and you to risk the cost of being wrong. But the cost of everyone getting this wrong is more than the sum of the costs of the individuals getting it wrong. Once something becomes near universally accepted, it is much harder for humankind to retreat from it.

Thus, while I want to convince you of incompatibilism, I also want there to be dissent in the epistemic community. This is something like a tragedy of the commons in the epistemic sphere.

Fortunately, human nature is such that I run only an insignificant risk of getting everyone to agree with me when I offer an argument for incompatibilism. So I can offer the arguments safely.

I chose the example of incompatibilism carefully. I wouldn't say the same thing about things that I am much more confident of, say that there is a physical world or that 2+2=4. There the risk of being wrong is so small, and the level of unreasonableness in denying the claim sufficiently high, that it would be good for the epistemic community to have universal agreement. On the other hand, there are philosophical doctrines which I think are likely to be true, but where I am sufficiently unsure that I would cringe if I convinced someone.

Tuesday, April 28, 2015

A quick argument for the bijection principle

The bijection principle says that if we have two sets A and B and we can pair up all the objects of the two sets, then the sets have the same number of members.

Some people don't like the bijection principle because it leads to the counterintuitive conclusion that there are as many primes as natural numbers.

Here's an argument for the bijection principle. Let's run the argument directly for the above controversial case—that should be enough of an intuition pump to get the general principle. Take infinitely many pieces of paper that are red on one side and blue on the other. Number the pieces of paper 1,2,3,..., putting the numerals down on the red sides. Then on the piece of paper numbered n on the red side, write down the nth prime on the blue side. Then:

  1. There are just as many natural numbers as red sides.
  2. There are just as many red sides as blue sides.
  3. There are just as many blue sides as prime numbers.
  4. So, there are just as many natural numbers as prime numbers.
It's very hard to deny that 4 follows from 1-3, and it's very hard to deny any of 1-3.
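The paper pairing can be written out directly (a toy illustration; the nth_prime helper is mine, not anything in the post): each piece of paper carries a natural number on its red side and the corresponding prime on its blue side, so a single object witnesses all three pairings at once.

```python
from itertools import count, islice

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def nth_prime(n):
    # 1-indexed: nth_prime(1) == 2
    return next(islice((k for k in count(2) if is_prime(k)), n - 1, None))

# Paper n: natural number n on the red side, nth prime on the blue side.
papers = [(n, nth_prime(n)) for n in range(1, 8)]
print(papers)  # [(1, 2), (2, 3), (3, 5), (4, 7), (5, 11), (6, 13), (7, 17)]
```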

Monday, April 27, 2015

Two kinds of authority

A president exercises authority with respect to a group that includes herself. She is included in two ways. First, the common good that her authority promotes is a good of a group that includes herself. Second, her legal enactments bind her just as much as they bind other citizens.

Not all authority is like this. A Dean of Students, for instance, exercises authority over students and for the sake of their good, and typically is not a student herself.

Sometimes authority of the second sort derives from authority of the first sort. The Dean of Students has an authority deriving from the consent of the students, and the students have an authority of the first sort over themselves. The general has an authority of the second sort (I think), deriving from that of the president.

An interesting hypothesis is that the second sort of authority always derives from the first. There are two nice test cases: parental and divine authority.

Parental authority is of the second sort. Is it derivative? If not, then the hypothesis is false. Maybe parental authority derives from the authority of one or both parents over the whole family, which would be of the first type? Or from God's authority?

God's authority is surely non-derivative and yet seems to be of the second type, yielding a counterexample. But maybe God's authority is of the first type: God is trying to put together a kingdom of ends that he is the head of.

Or maybe the first sort of authority derives from the second? That could make for a neat story, with divine authority on top.

Or maybe there is no interesting derivation relationship.

Friday, April 24, 2015

Blackmail, promises and self-punishment

I was reading this interesting paper which comes up with "blackmail" stories against both evidential and causal decision theory (CDT). I'll focus on the causal case. The paper talks about an Artificial Intelligence context, but we can transpose the stories into something more interpersonal. John blackmails Patrick in such a way that it's guaranteed that if Patrick pays up there will be no more blackmail. As a good CDT agent, Patrick pays up, since it pays. However, Patrick would have been better off if he were the sort of person who refuses to pay off blackmailers. For John is a very good predictor of Patrick's behavior, and if John foresaw that Patrick would be unlikely to pay him off, then John wouldn't have taken the risk of blackmailing Patrick. So CDT agents are subject to blackmail.

One solution is to add to the agent's capabilities the ability to adopt a policy of behavior. Then it would have paid for Patrick to have adopted a policy of refusing to pay off blackmailers and he would have adopted that policy. One problem with this, though, is that the agent could drop the policy afterwards, and in the blackmail situation it would pay to drop the policy. And that makes one subject to blackmail once again. (This is basically the retro-blackmail story in the paper.)

Anyway, thinking about these sorts of cases, I've been playing with a simplistic decision-theoretic model of promises and weak promises—or, more generally, commitments. When one makes a commitment, then on this model one changes one's utility function. The scenarios where one fails to fulfill the commitment get a lower utility, while scenarios where one succeeds in fulfilling the commitment are unchanged in utility. You might think that you get a utility bonus for fulfilling a commitment. That's mistaken. For if we got a utility bonus for fulfilling commitments, then we would have reason to promise to do all sorts of everyday things that we would do anyway, like eat breakfast.

This made me think about agents who have a special normative power: the power to lower their utility function in any way that they like. But they lack the power to raise it. In other words, they have the power to replace their utility function by a lower one. This can be thought of in terms of commitments—lowering the utility value of a scenario by some amount is equivalent to making a commitment of corresponding strength to ensure that scenario isn't actualized—or in terms of mechanisms for self-punishment. Imagine an agent who can make robots that will zap him in various scenarios.

Now, it would be stupid for an agent simply to lower his utility function by a constant amount everywhere. That wouldn't change the agent's behavior at all, but would make sure that the agent is less well off no matter what happens. However, it wouldn't be stupid for the agent to lower his utility function for scenarios where he gives in to blackmail by agents who can make good predictions of his behavior and who wouldn't have blackmailed him if they thought he wouldn't give in. If he lowers that utility enough—say, by making a promise not to negotiate with blackmailers or by generating a robot that zaps him painfully if he gives in—then a blackmailer like John will know that he is unlikely to give in to blackmail, and hence won't risk blackmailing him.
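Here is a minimal numerical sketch of that deterrence mechanism (the utility numbers and the perfectly accurate predictor are my assumptions, chosen only to make the structure visible):

```python
# Patrick's base utilities (toy numbers, my assumption):
PAY, REFUSE, NO_BLACKMAIL = -100, -1000, 0

def best_response(utils):
    """What a CDT agent does once blackmailed: pick the option with
    the highest utility."""
    return max(utils, key=utils.get)

def outcome(penalty_for_paying=0):
    # A commitment is modeled as lowering the utility of giving in;
    # nothing is ever raised.
    utils = {"pay": PAY - penalty_for_paying, "refuse": REFUSE}
    choice = best_response(utils)
    # John, a good predictor, blackmails only if he foresees payment.
    if choice == "pay":
        return utils["pay"]   # blackmailed, and pays up
    return NO_BLACKMAIL       # John doesn't risk blackmailing

print(outcome(0))     # -100: no commitment, so Patrick gets blackmailed
print(outcome(2000))  # 0: the penalty makes refusal optimal, so no blackmail
```

With no penalty, giving in is the best response, John foresees this, and Patrick ends up worse off; with a large enough self-imposed penalty, the blackmail never happens.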

The worry that the agent might change policies and thereby open himself to blackmail does not apply on this story. For the agent in my model has only been given the power to lower his utility function at will. He doesn't have the power to raise it. If the agent were blackmailed, he could lower his utility function for the scenarios where he doesn't give in, and thereby get himself to give in. But it doesn't pay to do that, as is easy to confirm. It would pay for him to raise his utility function for the scenarios where he gives in, but he can't do that.

An agent like this would likewise give himself a penalty for two-boxing in Newcomb cases.

So it's actually good for agents to be able to lower their utility function. Setting up self-punishments can make perfect rational sense, even in the case of a perfect rational agent, so as to avoid blackmail.

Wednesday, April 22, 2015

System-relativity of proofs

There is a generally familiar way in which the question whether a mathematical statement has a proof is relative to a deductive system: for a proof is a proof in some system L, i.e., the proof starts with the axioms of L and proceeds by the rules of L. Something can be provable in one system—say, Euclidean geometry—but not provable in another—say, Riemannian geometry.

But there is a less familiar way in which the provability of a statement is relative. The question whether a sentence p is provable in a system L is itself a mathematical question. Proofs are themselves mathematical objects—they are directly the objects in a mathematical theory of strings of symbols and indirectly they are the objects of arithmetic when we encode them using something like Goedel numbering. The question whether there exists a proof of p in L is itself a mathematical question, and thus it makes sense to ask this question in different mathematical systems, including L itself.

If we want to make explicit both sorts of relativity, we can say things like:

  1. p has (does not have) a proof in a system L according to M.
Here, M might itself be a deductive system, in which case the claim is that the sentence "p has (does not have) a proof in L" can itself be proved in M (or else we can talk of the Goedel number translation of this), or M might be a model in which case the claim is that "p has a proof in L" is true in that model.

This is not just pedantry. Assume Peano Arithmetic (PA) is consistent. Goedel's second incompleteness theorem then tells us that the consistency of PA cannot be proved in PA. Skipping over the distinction between a sentence and its Goedel number, let "Con(PA)" say that PA is consistent. Then what we learn from the second incompleteness theorem is that:

  2. Con(PA) has no proof in PA.
Now, statement (2), while true, is itself not provable in PA, since within PA statement (2) provably implies Con(PA), so a PA-proof of (2) would yield a PA-proof of Con(PA). Hence there are non-standard models of PA according to which (2) is false. But there are also models of PA according to which (2) is true, since (2) is in fact true. Thus, there are models of PA according to which Con(PA) has no proof and there are models of PA according to which Con(PA) has a proof.
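Writing Con for Con(PA) and Prov for PA's provability predicate applied to the Goedel number of Con, the two kinds of models come from standard facts:

```latex
\begin{gather*}
\mathrm{PA} \nvdash \neg\mathrm{Prov}(\mathrm{Con})
\ \Longrightarrow\ \mathrm{PA} + \mathrm{Prov}(\mathrm{Con})\ \text{is consistent}
\ \Longrightarrow\ \exists M \models \mathrm{PA}\ \text{with}\ M \models \mathrm{Prov}(\mathrm{Con}),\\
\mathbb{N} \models \mathrm{PA}\ \text{and}\ \mathbb{N} \models \neg\mathrm{Prov}(\mathrm{Con})
\quad \text{(since (2) is true in the standard model)}.
\end{gather*}
```

Any such M is non-standard: what it counts as a "proof" of Con(PA) is coded by a non-standard number.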

This has an important consequence for philosophy of mathematics. Suppose we want to de-metaphysicalize mathematics, move us away from questions about which axioms are and are not actually true. Then we are apt to say something like this: mathematics is not about discovering which mathematical claims are true, but about discovering which mathematical claims can be proved in which systems. However, what we learn from the second incompleteness theorem is that the notion of provability carries the same kind of exposure to mathematical metaphysics, to questions about the truth of axioms, as naively looking for mathematical truths did.

And if one tries to de-metaphysicalize provability by saying that what we are after in the end is not the question whether p is provable in L, but whether p is provable in L according to M, then that simply leads to a regress. For the question whether p is provable in L according to M is in turn a mathematical question, and then it makes sense to ask according to which system we are asking it. The only way to arrest the regress seems to be to suppose that at some level we are simply talking of how things really are, rather than how they are in or according to a system.

Maybe, though, one could say the following to limit one's metaphysical exposure: Mathematics is about discovering proofs rather than about discovering what has a proof. However, this is a false dichotomy, since by discovering a proof of p, one discovers that p has a proof.

Tuesday, April 21, 2015

Deep Space Nine in Minecraft

My big kids and I are Deep Space 9 fans. Here's a Deep Space 9 station rendered in Minecraft using our modifications to Martin O'Hanlon's rendering script from a mesh by Joerg Gerlach. For more on python and Minecraft, see here.

Monday, April 20, 2015

Escaping infinitely many arrows

Suppose infinitely many thin arrows are independently shot at a continuous target, with hitting points uniformly distributed over the target. How many arrows would we need to shoot to make it likely that the center of the target has been hit?

Given finitely or countably infinitely many arrows, the probability that the center will be hit is zero. But what if there are as many arrows as points in the continuum? And what if there are more?
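For the countable case the zero answer is forced by countable subadditivity: if the hit points are X_1, X_2, ... and c is the center, then

```latex
P\Big(\bigcup_{n=1}^{\infty} \{X_n = c\}\Big)
\ \le\ \sum_{n=1}^{\infty} P(X_n = c)
\ =\ \sum_{n=1}^{\infty} 0\ =\ 0.
```

No such computation is available once the index set is uncountable, which is exactly where the standard model goes silent.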

I don't know of a good mathematical model for these questions. Standard mathematical probability is defined up to sets of measure zero, and this makes it not useful for answering questions like this. Questions like this seem to make sense, nonetheless, thereby indicating a limitation of our mathematical models. But perhaps that is a mere seeming.

Saturday, April 18, 2015

Bigger and smaller infinities

Anecdotal data suggests that a number of people find counterintuitive the Cantorian idea that some infinities are bigger than others.

This is curious. After all, the naive thing to say about the prime numbers and the natural numbers is that

  1. while there are infinitely many of both, there are more natural numbers than primes.
For the same reason it is also surely the obvious thing to say that
  2. while there are infinitely many of both, there are more real numbers than natural numbers.
So there is nothing counterintuitive about different sizes of infinity. Of course, (1) is false. Our untutored intuitions are wrong about that case. And that fact should make us suspicious whether (2) is true; given that the same intuitions led us astray in the case of (1), we shouldn't trust them much in case (2). However, the fact that (1) is false should not switch (2) from being intuitive to being counterintuitive. Moreover, our reasons for thinking (1) to be false—namely, the proof of the existence of a bijection between the primes and the naturals—don't work for (2).

All in all, rather than taking (2) to show us how counterintuitive infinity is, we should take (2) to vindicate our pretheoretic intuition that cardinality comparisons can take us beyond the finite, even though some of our pretheoretic intuitions as to particular cardinality comparisons are wrong.

Friday, April 17, 2015

Living in the moment, literally

Jim lives for a minute. Then he activates the time-and-space machine in his backpack, and travels one minute back in time and one meter back in space. Then the story repeats, giving Jim a lifespan of 80 internal years, all contained within a single minute of external time.

We could shorten that minute of external time to a second, or to any non-zero length of time, by making him jump back in time even faster.

Bold Hypothesis: We could shorten it to zero.

This works most easily if Jim is made out of ghostly matter that can overlap itself (nothing absurd about this: two photons can be in the same place at the same time), and as we shorten the time interval, we shorten the spatial distance of the jump.

The Bold Hypothesis basically says that just as one can have a time-travel machine, one can have a time-non-travel machine that keeps one in the same place in external time for all one's life.

Given the possibility of time travel, and the possibility of discrete time, it's not hard to argue for the Bold Hypothesis. Suppose at each instant of time, Jim can set the time-machine to determine where he will be in the next internal instant. Then why couldn't he set it so that in the next internal instant he will be at the same external instant as he is now?

Given the Bold Hypothesis, Jim would have a lifespan of 80 internal years, all in one moment.

All this suggests that when thinking about time, we should be careful in moving from our subjective experience of time and change—which Jim would have in his all-at-one-moment life—to claims about what external time is like.

Thursday, April 16, 2015

When would a computer feel pain?

Whether a computer could feel pain shouldn't depend on fine detail of how the CPU synchronization works or whether the CPU is implemented with electricity, or light, or gears. It's only the computational characteristics that matter if computers can be conscious.

Let's imagine a simplified picture of a computer's synchronization. There is a synchronizing clock. Each time the clock ticks, the computer very quickly executes the next instruction and enters a new state. Then it stays in its new state until the next clock tick.

Let's imagine that the dynamic stuff that is triggered by each clock tick takes place over a small portion of the time between ticks—most of the time between ticks, the computer is staying in a static state. For instance, if the computer is made up of gears, as a computer could well be (though it would be impractically big), then the picture is this. The computer is still for a while. Then the clock ticks. The gears make a quick movement to a new configuration. And then the computer is still until the next tick.
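The tick-and-freeze picture can be sketched as a toy state machine (the state names and transition table are my invention, not anything from the post):

```python
class GearComputer:
    """Toy model: the configuration changes only at clock ticks and is
    frozen in between; a transition has no intermediate configurations."""

    def __init__(self, transitions, state):
        self.transitions = transitions  # maps state -> next state
        self.state = state
        self.history = [state]

    def tick(self):
        # The whole state change happens "at" the tick, instantaneously
        # in this model.
        self.state = self.transitions[self.state]
        self.history.append(self.state)

prog = {"still": "grinding", "grinding": "still"}
c = GearComputer(prog, "still")
for _ in range(3):
    c.tick()
print(c.history)  # ['still', 'grinding', 'still', 'grinding']
```

If the clock breaks between ticks, `state` simply persists unchanged: that is the stuck-between-ticks case discussed below.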

Suppose the computer feels pain. When does it do that? It is hard to believe that the computer is feeling pain when it is statically maintaining its state in between ticks. Suppose the computer got stuck between ticks—the clock broke down. Would the computer be permanently in pain?! I guess I just find it incredible that a contraption made of gears should feel something when the gears are not even moving.

So I think the best candidate for when the computer would feel pain would be when it is transitioning between states. But remember the intuition that only the computational characteristics matter to things like pain if computers can be conscious—the details as to implementation should be irrelevant. Assuming it's possible for a computer to feel pain, we should thus be able to have a possible world with a computer in pain whose state transitions are instantaneous. For a nanosecond all is still. Then a clock ticks. The computer instantaneously jumps to a new state. Another nanosecond of stillness. And so on.

The computer in this story, then, feels pain only during a series of instants. The total amount of time it spends feeling pain is zero. Yet it feels pain. Is that possible? Can there be pains that take no time at all?

Perhaps, though, the computer's subjective time doesn't line up with objective time. Maybe objectively the pains take zero time, but subjectively they take a lot more time? I don't know if this is possible.

There may be an argument against the possibility of computers feeling pain in the vicinity. In any case, there are interesting questions here.

Note: When I talk of a computer feeling pain, I mean a merely material computer. As Swinburne has pointed out to me, God could give a soul to a computer. And then there could be consciousness. But the subject of the pain, I think, wouldn't be the computer. The computer would be like a body. My body never feels pain. It is I (the whole of which the body is a part) who feel the pain.

Wednesday, April 15, 2015

A paradox about prediction of belief

Sally is perfectly honest, knows for sure whether there has ever been life on Mars (she's just finished an enormous amount of NASA data analysis), and is a perfect predictor of my future beliefs. She then informs me that she knows what I will believe at midnight about whether there was once life on Mars, and she further informs me that:

  1. There was once life on Mars if and only if at midnight tonight I will fail to believe that there was once life on Mars.
Moreover, I know that:
  2. I won't get any other evidence relevant to whether there was once life on Mars.
I'd love to know whether there was once life on Mars. I start off thinking:
Well, right now I have no belief either way, and I am unlikely to get any evidence before midnight. So by midnight I will also have no belief either way. And thus by Sally's information there was once life on Mars.
But of course as soon as I accept this argument, I start to believe that there was life on Mars. And I know that if I keep on believing this until midnight, then my belief is false. I quickly see the pattern, and I realize that I don't know what to think! But when I don't know what to think, I default to suspension of judgment. But this, too, leads me astray: For as soon as I think that the appropriate rational attitude for me is suspension of judgment, then I start thinking I will suspend judgment at midnight, and I then conclude that there was once life on Mars. And the circle starts again.

Now, I know I'm not perfectly rational. So I can get out of the circle by concluding that given how confusing this case is, I am probably not going to act rationally. So something non-rational will affect my beliefs by midnight, and I don't know what that will be, so I might as well not speculate until that happens. Sally knows what it will be, but I don't.

But suppose I am perfectly rational. I shall assume that a part of perfect rationality is knowing for sure that one is perfectly rational, knowing for sure what one believes, and drawing all the right conclusions from one's evidence. What should I believe in the above case?

Tuesday, April 14, 2015

Truth and Dutch Books

Suppose I initially assigned probability 0.5 to p and 0.5 to ~p. Suppose p is in fact true, and my credence in p comes to be magically increased to 0.8 without my credence in ~p being changed. I thus have inconsistent probabilities: 0.8 for p and 0.5 for ~p. This is supposed to be bad: it lays me open to Dutch Books. For instance, I will accept the following pair of options:

  1. Pay $0.75 to win $1.00 if p
  2. Pay $0.45 to win $1.00 if ~p.
But if I do that, then I will pay $1.20 and get $1.00, for a net loss of $0.20.

Yes, that's an unhappy result. But note that I am actually better off than earlier when my credences were consistent. Earlier I would have rejected (1) since my credence in p was 0.5, but I would have accepted (2). So I would have paid $0.45 and got nothing to show for it. Thus my revision in the direction of truth made me be better off, even though it also led me to accept a Dutch Book.
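The payoffs can be checked in a few lines (the acceptance rule, taking a bet whenever its price is below your credence, is the standard one assumed here):

```python
def accepted(credence, cost):
    # An agent accepts "pay cost to win $1 if q" iff her credence in q
    # exceeds the cost.
    return credence > cost

def net_payoff(cred_p, cred_not_p, p_true=True):
    total = 0.0
    if accepted(cred_p, 0.75):       # bet 1: pay $0.75 to win $1 if p
        total += (1.0 if p_true else 0.0) - 0.75
    if accepted(cred_not_p, 0.45):   # bet 2: pay $0.45 to win $1 if ~p
        total += (0.0 if p_true else 1.0) - 0.45
    return round(total, 2)

# p is in fact true in both cases:
print(net_payoff(0.8, 0.5))  # -0.2: inconsistent but truer, accepts both bets
print(net_payoff(0.5, 0.5))  # -0.45: consistent, accepts only the losing bet
```

The inconsistent-but-truer agent is Dutch-booked and still walks away $0.25 better off than the consistent agent, which is the point of the post.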

This suggests that pragmatically and synchronically speaking what matters is truth, not probabilistic consistency. Better be inconsistent and closer to truth than consistent and further from truth.

Diachronically, of course, at least logical inconsistency could be dangerous, as it can lead to lots of absurd conclusions. But in practice we all have inconsistent beliefs and we manage to contain the inconsistency without much in the way of explosion.

So what's so bad about Dutch Books? It seems to be this: an opponent who knows (with certainty) your credences and doesn't know (at least with certainty) whether p is true can offer you a series of bets that you are guaranteed to lose money on. This is a big deal if you're playing an adversarial game against such an opponent. But such games are, I think, a special case, and while they do occur in war, business, sport and other competitive pursuits, we should not let competitive pursuits against fellow humans dictate the nature of rationality to us. And note a curious thing: consistency is not the only available strategy against such an opponent—hiding your credences will also help. If you revise your credence in the direction of truth but your opponent doesn't know about your revision, you will do at least as well as before, and quite possibly better.

Monday, April 13, 2015

Particle accretion and excretion in Aristotelian ontology

In Aristotelian ontology the matter and parts of a substance get their being from their substance. But now we have a problem: we constantly accrete (say, when eating) and excrete (say, when sloughing off skin-cells) particles. These particles seem to exist outside of us, then they exist as part of us, and then one day they come to exist outside of us again. How could their being come from our form, when they existed before they joined up with us—sometimes, presumably, even before we existed at all?

But suppose an ontology for physics on which fields are more fundamental than particles, and particles are like a bump or wave-packet in a field. Then we have a very nice solution to the problem of accretion and excretion.

Imagine two ropes. Rope A is tied by one end to a hook on the wall and the other end of rope A is tied to the end of rope B. And you're holding the other end of rope B. You rapidly move your end of the rope up and down. A wave starts traveling along rope B, then over the knot, and finally along rope A. We are quite untroubled by this description of this ordinary phenomenon.

In particular, it is correct to say that the same wave was traveling along rope A as along rope B. Yet surely the being of a wave in a medium comes from the medium and its movement. So we have a very nice model. Rope B has excreted the wave and rope A has accreted it. (You might object that in Aristotelian ontology, ropes aren't substances. Very well: replace them with strings of living kelp.) If the knot is negligible enough, then the shape of the wave will seamlessly travel from rope B into rope A.

I think one reason an Aristotelian is apt to be untroubled by this description is that we don't take waves in a rope ontologically very seriously, just as we shouldn't take kings in chess very seriously. They're certainly not fundamental. Perhaps they don't really exist, and we have merely adopted a mode of speech on which it's correct to talk as if they existed.

However, if a field ontology is correct, we shouldn't take particles any more seriously than waves in a rope. And then we can start with the following model. Among the substances in the world, there are fields, gigantic objects that fill much of spacetime, such as the electromagnetic field. And there are also localized substances, which are tiny things like an elephant or a human or a bacterium. The fields have holes in them, holes perfectly filled by the localized substances. The localized substances exist within the fields much like a diver exists in the ocean—the diver exists in a kind of hole in the ocean's water.

Next, pretty much the same kinds of causal powers that are had by the fields are had by the localized substances. Thus, while strictly speaking there is no electromagnetic field where your body is found, you—i.e., the substance that is you—act causally just as the electromagnetic field would. A picture you might have is of a string whose central section has rotted out and been seamlessly replaced with a piece of living kelp that happens to have the same material properties as the surrounding string. But you don't just do duty for the electromagnetic field. You do duty for all the fundamental fields.

Because you have pretty much the same kinds of causal powers as the fields that surround you, waves can seamlessly pass through you, much as they can through a well-installed patch in a rubber sheet. You accrete the waves and then excrete them. Some wave packets we call "particles".

Objection: When I digest something, it becomes a part of me. But when a radio wave passes through me, it doesn't become a part of me even for a brief period of time.

Response 1: We shouldn't worry about this. In both cases we're talking about non-fundamental entities. There are many ways of talking. For practical reasons, it's useful to distinguish those wave packets that stick around for a long time from those that pass in and out. So we say that the former are denizens of us and the latter are visitors.

Response 2: Perhaps that's right. Maybe we don't exist in holes in the fields, but rather the fields overlap us. However, when the fields are in us, we take over some, but not all, of their causal powers. The radio wave that travels through me does so by virtue of the electromagnetic field's causal powers, while the particles of the piece of cheese that I digest and which eventually slough off with my dead skin travel through me by virtue of my causal powers. The picture now is more complicated.

Friday, April 10, 2015

Big or small?

Isn't it interesting that we currently don't know whether the fundamental physical entities are the tiniest of things—particles—or the largest of things—fields—or both?


It sure seems that:

  1. A good human life is an integrated human life.
But suppose we have a completely non-religious view. Wouldn't it be plausible to think that there is a plurality of incommensurable human goods and the good life encompasses a variety of them, but they do not integrate into a unified whole? There is friendship, professional achievement, family, knowledge, justice, etc. Each of these constitutively contributes to a good human life. But why would we expect that there be a single narrative that they should all integrally fit into? The historical Aristotle, of course, did have a highest end, the contemplation of the gods, available in his story, and that provides some integration. But that's religion (though natural religion: he had arguments for the gods' existence and nature).

Nathan Cartagena pointed out to me that one might try to give a secular justification for (1) on empirical grounds: people whose lives are fragmented tend not to do well. I guess this might suggest that if there is no narrative that fits the various human goods into a single story, then one should make one, say by expressly centering one's life on a personally chosen pattern of life. But I think this is unsatisfactory. For I think that the norms that are created by our own choices for ourselves do not bear much weight. They are not much beyond hobbies, and hobbies do not bear much of the meaning of human life.

So all in all, I think the intuition behind (1) requires something like a religious view of life.

Thursday, April 9, 2015

Can something material become immaterial?

Two angels are playing chess. They are immaterial, but have the causal powers of moving physical pieces on the board. Along comes a big snake and swallows the board. No worries: the angels keep on playing, but now the positions of the chess pieces are kept track of in their minds instead. So the king, say, was first a material object. But the king then became an object wholly constituted by the angels' thoughts, and hence immaterial. And it is the same king. While in chess you can get a new queen by promoting a pawn, you don't get a new king within a game.

(Of course, I suspect that the true ontology doesn't include artifacts like chess kings.)

Wednesday, April 8, 2015

The equal weight view

Suppose I assign a credence p to some proposition and you assign a different credence q to it, even though we have the same evidence. We learn of each other's credences. What should we do? The Equal Weight View (EWV) says that:

  1. I shouldn't give any extra weight to my own credence just because it's mine.
It is also a standard part of the EWV as typically discussed in the literature that:
  2. Each of us should revise the credence in the direction of the other's credence.
Thus if p>q, then I should revise my credence down and you should revise your credence up.

It's an odd fact that the name "Equal Weight View" only connects up with tenet (1). Further, the main intuition behind (1) is the thought that I shouldn't hold myself out as epistemically special, and that does not yield (2). What (1) yields is at most the claim that the method I should use for computing my final credence upon learning of the disagreement should be agnostic as to which of the two initial credences was mine and which was yours. But this is quite compatible with (2) being false. The symmetry condition (1) does nothing to force the final credence to be between the two credences. It could be higher than both credences, or it could be lower than both.

In fact, it's easy to come up with cases where this seems reasonable. A standard case in the literature is where different people calculate their share of the bill in a restaurant differently. Vary the case as follows. You and I are eating together, we agree on a 20% tip and an equal share, and we both see the bill clearly. I calculate my share to be $14.53 with credence p=0.96. You calculate your share to be $14.53 with credence q=0.94. We share our results and credences. Should I lower my confidence, say to 0.95? On the contrary, I should raise it! How unlikely it is, after all, that you should have come to the same conclusion as me if we both made a mistake! Thus we have (1) but not (2): we both revise upward.
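The reasoning in the bill-splitting case can be checked with a toy Bayesian model (all the numbers here are invented for illustration): suppose the prior that my share is exactly $14.53 is 0.01, each of us computes correctly with probability 0.9, and an erroneous calculation lands on one of 100 wrong values at random. Then one calculation yielding $14.53 supports a credence around 0.9, but two independent calculations agreeing push the posterior far higher than either initial credence:

```python
# Toy Bayesian model of the bill-splitting case (all numbers invented).
# H: my share really is $14.53.
prior = 0.01          # a specific amount like $14.53 is antecedently unlikely
reliability = 0.9     # chance a given calculation is correct
n_wrong = 100         # an erroneous calculation hits one of 100 wrong values

def posterior_after(n_agreeing: int) -> float:
    """Posterior for H after n independent calculations all output $14.53."""
    like_h = reliability ** n_agreeing
    like_not_h = ((1 - reliability) / n_wrong) ** n_agreeing
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

one = posterior_after(1)   # after my own calculation: about 0.90
two = posterior_after(2)   # after learning you also got $14.53: about 0.9999
print(round(one, 3), round(two, 4))
```

The key feature is that an agreeing error is far less likely than an agreeing correct answer, so learning of your matching result raises my credence rather than averaging it down.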

There is a general pattern here. We have a proposition that has very low prior probability (in the splitting case the proposition that my share will be $14.53 surely has prior credence less than 0.01). We both get the same evidence, and on the basis of the evidence revise to a high credence. But neither of us is completely confident in the evaluation of the evidence. However, the fact that the other evaluated the evidence in a pretty similar direction overcomes the lack of confidence.

One might think that (2) is at least true in the case where the two credences are on opposite sides of 1/2. But even that may be wrong. Suppose that you and I are looking at the results of some scientific experiment and are calculating the value of some statistic v that is determined entirely by the data. You calculate v at 4.884764447, with credence 0.8, being moderately sure of yourself. But I am much less confident in my arithmetical abilities, and so I conclude that v is 4.884764447 with credence 0.4. We're now on opposite sides of 1/2. Nonetheless, I think your credence should go up: it would be too unlikely that my calculations would support the exact same value that yours did.

One might worry that in these cases, the calculations are unshared evidence, and hence we're not epistemic peers. If that's right, then the bill-splitting story standard in the literature is not a case of epistemic peers, either. And I think it's going to be hard to come up with a useful notion of epistemic peerhood that gives this sort of judgment.

I think what all this suggests is that we aren't going to find some general formula for pooling our information in cases of disagreement, as some people in the literature have tried (e.g., here). Rather, to pool our information, we need a model of how you and I came to our conclusions, a model of the kinds of errors we were liable to commit along the way, and then we need to use that model to evaluate how to revise our credences.

Tuesday, April 7, 2015

Plot Armor

When Bob is the lead protagonist of a work, his presence is essential to the plot. Accordingly, the rules of the world seem to bend around him. The very fact that he's the main character protects him from death, serious wounds, and generally all lasting harm (until the plot calls for it). Even psychological damage can be held at bay by Bob's suit of Plot Armor. -tvtropes
It's natural to think of Plot Armor as a bad thing, a kind of invulnerability with no in-world explanation.

But I think it's not as bad as it seems at first sight. Suppose the possible world where Bob's story happens were actual. There is a selection effect as to which people we want to hear a long real-life story about. First, their life has to be interesting. One way for a life to be interesting is for the person to face a lot of danger. Second, their life needs to be sufficiently long to tell a long story about. Third, we don't want to hear too many depressing stories, so we don't want a story about someone whose life completely falls apart. All of this makes it likely that, even in the real world, stories like Bob's would be the ones told.

In a world with billions of people, we expect some to have multiple unlikely hair's-breadth escapes. And we'd like to hear stories about them. It's unlikely that escapes this narrow happen to Bob, but not so unlikely that they happen to someone.

So it's false to say that Plot Armor has no in-world explanation. If we imagine the story as being told by an in-world narrator (perhaps an implied one), we can give an in-world explanation in terms of selection by the narrator.

Of course, when the improbability of the escapes reaches the point where we wouldn't expect anyone to have them, even with the population being as large as the story portrays it to be (science fiction about a whole populated galaxy has more latitude here, thanks to a much larger population to work with), this is problematic.

Monday, April 6, 2015

More against neo-conventionalism about necessity

Assume the background here. So, there is a privileged set N of true sentences from some language L, and N includes, among other things, all mathematical truths. There is also a provability-closure operator C on sets of L-sentences. And, according to our neo-conventionalist, a sentence p of L is necessarily true just in case p ∈ C(N).

Moreover, this is supposed to be an account of necessity. Thus, N cannot contain sentences with necessity operators, and C must have the property that applying C to a set of sentences without necessity operators does not yield any sentence of the form Lp, where L is the necessity operator. (It may be OK to yield tautologies like "Lp or ~Lp", or conjunctions of such tautologies with sentences in the input set, etc.) If these conditions are not met, then we have an account of necessity that presupposes a prior understanding of necessity.

Now consider an objection. Surely it is necessary that 1=1; so not only is L(1=1) true, but it is necessarily true. But now we have a problem. For C(N), by the conditions in the previous paragraph, contains no Lp sentences. Hence it doesn't contain the sentence "L(1=1)".

But this was far too quick. For the neo-conventionalist can say that "L(1=1)" is short for something like "'1=1'∈C(N)". And the constraint on the absence of necessity operators is compatible with the sentence "'1=1'∈C(N)" itself being a member of C(N).

This means that the language L must contain a name for N, say "N", or some more complex rigidly designating term for it (say a term expressing the union of some sets). Let's suppose that "N" is in L, then. Now, sentences are mathematical objects—finite sequences of symbols in some alphabet. (Or at least that seems the best way to model them for formal purposes.) We can then show (cf. this) that there is a mathematically definable predicate D such that D(y) holds if and only if y is the following sentence:

  • "For all x, if D(x), then ~(xN)."
But if y is this sentence, then y is a mathematical claim. If this mathematical claim isn't true, then y is a member of N. But then y is true. On the other hand, if y is true, then being a mathematical claim it is a member of N, and hence y is false. (This is, of course, structurally like the Liar. But it is legitimate to deploy a version of the Liar against a formal theory whose assumptions enable that deployment. That's what Goedel's incompleteness theorems do.)

To recap. We have an initial difficulty with neo-conventionalism in that no sentence with a necessity operator ends up necessary. That difficulty can be overcome by replacing sentences with a necessity operator with their neo-conventionalist analyses. But doing that gets us into contradiction.

(It's perhaps formally a bit nicer to formulate the above in terms of Goedel numbers. Then we replace Lp with n ∈ C*(N*), where n is the Goedel number of p, and C* and N* are the Goedel-number analogues of C and N. Diagonalization then yields a contradiction.)

One place where I imagine pushback is my assumption that C doesn't generate Lp sentences. One might think that C embodies the rule of necessitation, and hence in particular it yields Lp for any theorem p. But I think necessitation presupposes necessity, and so it is illegitimate to use rules that include necessitation to define necessity. However, this is a part of the argument that I am not deeply confident of.

Sunday, April 5, 2015

Happy Easter!

Happy Easter to all my readers! Christ has indeed risen, turning despair into hope, shining light where there was none.

Saturday, April 4, 2015

Vows to God and expectational views of promises

On Scanlon's expectational view of promises, a crucial part of why a promise is binding is that it creates an expectation of performance in the promisee. But if Sam vows something to God, then that doesn't create any expectation of performance on the part of God. For if Sam will perform the action, God has always known that. And if Sam won't, God's always known that. But if there were a God, one could make promises to him. So the expectational view is false.

Weak promises

Commanding is meant to create an obligating reason for another, while requesting is meant to create a non-obligating one. Promising is meant to create an obligating reason for self. There is a natural spot in illocutionary space, then, for a speech act meant to create a non-obligating reason for self, a speech act type that stands to promising as requesting does to commanding.

We would expect that when I have a normative power, I also have the corresponding weaker powers. If a legislature can bind under pain of ten years' imprisonment, they can bind under pain of a week's imprisonment. If I can create an obligating reason for myself, I can create a non-obligating reason for myself. That's another reason to think that we would have the "weak promise" speech act that creates non-obligating reasons.

I am not sure we have good phrases to express weak promises. We can approximate the force of a weak promise by weaselly promissory wordage like "I'll try to do this" or "I'll take your needs into account".

Friday, April 3, 2015

Promises without release

Normally, if I promise you something, you can release me from my promise. But suppose that I promise to fire you if you are once again inexcusably late to a meeting with upper management. And you are inexcusably late. It would have been a pointless promise if you could release me now!

So, either that wasn't a promise or not all promises allow for release by the promisee. I want to explore the second option. So it was a promise but I can't be released.

Consider another case. I promise you that if it rains during a full moon, I will fire you. My intuition is that in the lateness case, the promise takes—I am bound. But the full moon case is not a case of a successful promise, at least under normal circumstances where there is no benefit to you in being fired (we can imagine circumstances in which you expectantly await the full moon and pray for rain)—I have no obligation to fire you, even if your contract allows me to fire you for no cause.

Why the difference? My best explanation is that in the lateness case, your punishment is a benefit to you. It is good for one to suffer a just punishment (I am assuming it's just—if not, then I think the promise doesn't take). And only benefits can be promised.

But that only heightens the first puzzle. If I promise you a good, can't you sacrifice that good, thereby releasing me?

There is another case. You promise me that if I ever become cynical and lose my ideals, you will try to convince me to return to them. I have become cynical and lost my ideals. It would be silly to think I could release you from your promise so as to avoid bothering with your arguments.

This and the punishment cases are cases where the good that is promised is one that the promisee doesn't want when the good is bestowed. In the punishment case, perhaps the promisee never wanted it, while in the ideals case the promisee wanted it but lost the desire.

Maybe there is no such thing as a general release condition on promises? Maybe it's just typically an implicit part of the promise, unless either the content or the context or explicit speech cancels it. But when we promise punishment or convincing, the content of the promise removes the implicit "unless you don't want me to"?

Here's another idea. Catholic canon law says that a private vow to God can be commuted to a better vow. If I vowed to give a dollar to the soup kitchen, I can commute it to volunteering for a weekend. Normally, even if there is no "unless you don't want me to" qualifier in a promise, the promise becomes better when the qualifier is added. So normally the promise can be released from, since I can just add the qualifier, thereby improving the promise, and then get released by your triggering the exception clause. But adding the "unless you don't want me to" clause to the punishment or returning-to-ideals promises doesn't make the promise better. It makes it worse.

Note that it is essential to this solution that being punished is good for the punishee.

Thursday, April 2, 2015

An argument against neo-conventionalism in modality

The neo-conventionalist account of necessity holds that necessity is just a messy property accidentally created by our conventions. We historically happened to distinguish a certain family N of true sentences. For instance, N might include the mathematical truths, the truths about the identities of natural kinds (e.g., "water = H2O"), the truths about the scope of composition, etc. Then we said that a sentence is necessarily true if and only if it is a member of the closure C(N) of N under some logical deduction rules. (Alternately, one might do this in terms of propositions.)

Here is a criterion of adequacy for a theory of modality. That theory must yield the following obvious, uncontroversial and innocuous-looking fact:

  1. Necessarily, some sentence is not necessary.
Some things just have to be possible. (Note: In System T, if p is any tautology, then necessarily ~p is not necessary.)
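Spelling out the parenthetical note, here is a sketch of the derivation (writing L for the necessity operator, and using the T axiom Lq → q together with the rule of necessitation):

```latex
% In System T: if p is a tautology, then L(~L~p),
% i.e., necessarily, ~p is not necessary.
\begin{align*}
&\vdash p                          && \text{$p$ is a tautology}\\
&\vdash L\neg p \to \neg p         && \text{axiom T, instantiated at $\neg p$}\\
&\vdash p \to \neg L\neg p         && \text{contraposition}\\
&\vdash \neg L\neg p               && \text{modus ponens}\\
&\vdash L\neg L\neg p              && \text{necessitation}
\end{align*}
```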

A neo-conventionalist proposal consists of a family N of true sentences and a closure operator C. For any neo-conventionalist proposal, we then can raise the question whether it satisfies condition (1). Formulating this condition precisely within neo-conventionalism takes a bit of work, but basically it'll say something like this:

  1. "Some sentence is not a member of C(N)" is a member of C(N).

There is a more intuitive way of thinking about the above condition. A family A of sentences is such that C(A) is all sentences if and only if the family A is C-inconsistent, i.e., inconsistent with respect to the rules defining C. (This is actually a fairly normal way to define inconsistency in a wide range of logics.) So (2) basically says:

  1. "N is C-consistent" is C-provable from N.

Put that way, we see that our innocuously weak assumption (1) is actually a pretty strong condition on a neo-conventionalist proposal. It is certainly not guaranteed to be satisfied. For instance, a neo-conventionalist proposal where N is a finite set of axioms and C is a formal system (with the axioms and formal system sufficient for the operations in C) will fail to satisfy (3) by Goedel's Second Incompleteness Theorem.

This last observation shows that the question of whether a neo-conventionalist proposal satisfies (3) can be far from trivial. Now, in practice nobody espouses a neo-conventionalist proposal with a finite set of axioms. All the proposals in the literature that I've seen just throw all mathematical truths in, so Goedel's Second Incompleteness Theorem is not applicable.

But even if it's not applicable, it shows that the question is far from trivial. And that is unsatisfactory. For (1) is obviously true. Yet on a neo-conventionalist proposal it becomes a very difficult question. That by itself is a reason to be suspicious of neo-conventionalism. In fact, we might say: We know (1) to be true; but if neo-conventionalism is true, we do not know (1) to be true; hence, neo-conventionalism is not true.

Now, one can probably craft neo-conventionalist proposals that satisfy our constraint. For instance, if N is just the set of mathematical truths (considered broadly enough to include truths about what sentences are C-provable from what) then "N is C-consistent" will be true, and hence a member of N, and hence C-provable from N. But of course that's just another proposal that nobody endorses: there are more necessities than the mathematical ones.

And here's the nub. The neo-conventionalist isn't just trying to craft some proposal or other that satisfies (1). She is proposing to let N be those truths that we have conventionally distinguished (she may not be making an analogous move about C; she could let C be closure under provability in the One True Logic). But we did not historically craft our choice of distinguished truths so as to ensure (3). Consider the following curious definition of an even number:

  4. A number is even if and only if it has the same parity as the number of words in my previous blog post.
This account might in fact get things right—if we are lucky enough that the number of words in my previous post is divisible by two. But I did not choose my wording in that post with that divisibility in mind. I chose the wording for completely different reasons. We don't have reason to think, without actually counting, that (4) is correct. And even if it is correct, it is only by luck that I happened to use an even number of words, and we don't want a theory to rest on luck like that.

Wednesday, April 1, 2015

An Axiom of Choice strong enough to puzzle

All the main puzzles that follow from the Axiom of Choice (AC)—nonmeasurable sets, Banach-Tarski and guessing future coin tosses—need only a weaker version of AC. One weaker version that suffices is this:

(*) There is a choice function for any partition of the interval (0,1) into non-empty countable sets.

Now imagine worlds with point-sized particles that never move, but can perish and come into existence. The world starts at time 0. Each particle has a lifetime between 0 and 1, exclusive. Some locations in the world are never occupied by a particle. Call these "vacant". At all other locations, a particle comes into existence at time 0. Two particles never occupy the same location at the same time. Call such worlds p-worlds.

For each non-vacant location x in a p-world w, there is an associated set L(w,x) of numbers in (0,1), where a number y is in L(w,x) iff some particle at x has a lifetime of length y. I now need a crucial metaphysical plenitude assumption:

(**) For any set S such that (a) every member of S is a countable non-empty collection of members of (0,1) and (b) the cardinality of S is at most that of the continuum, there is a p-world w such that for each A in S there is a unique location x in w such that L(w,x)=A.
In other words, any set S satisfying (a) and (b) is the set of sets of lifetime lengths for non-vacant locations in some p-world, without duplication.

Given the plenitude assumption, I get the version of AC needed for the paradoxes. For given a partition S of (0,1) into countable sets, there will be a p-world as in (**). Given a member A of S, there will be a unique location x such that L(w,x)=A. Let f(A) be the lifetime of the first particle at x in w. This is our choice function.

So the major paradoxes of AC follow from a plausible plenitude assumption about possible worlds.

Weak and strong incommensurability

X and Y are weakly incommensurable iff there is a dimension of evaluation where X beats Y and a dimension of evaluation where Y beats X. X and Y are strongly incommensurable iff they are weakly incommensurable and a rational agent doesn't have on balance reason to choose X over Y and doesn't have on balance reason to choose Y over X.

Weak incommensurability is precisely what is needed for the possibility of a rational agent choosing either over the other.

Weak incommensurability is evidence of strong incommensurability. But there are cases where weak incommensurability fails to yield strong incommensurability. One kind of case involves extremity. If one is choosing between being a superb nurse and a very mediocre mathematician, there is weak incommensurability, but one may have on balance reason to be a nurse (all other things being equal). When, however, one is choosing between being a nurse and being a mathematician, and one's professional quality in each would be moderately close, the choice is plausibly strongly incommensurable as well.