Thursday, August 17, 2017

Yet another infinite lottery machine

In a number of posts over the past several years, I’ve explored various ways to make a countably infinite fair lottery machine (assuming causal finitism is false), typically using supertasks in some way.

Here’s another, slightly simplified from a construction in Norton. Suppose we toss a countably infinite number of fair coins to make an array with infinitely many infinite rows that could look like this:


Make sure that nobody looks at the coins after they are tossed. Here’s something that could happen: each row of the array contains one and only one tails. This is unlikely (probability zero; Norton originally said it's nonmeasurable, but that was a mistake, and we're coauthoring a correction to his paper) but possible. Have a robot scan the array—a supertask will be needed—to verify whether this unlikely event has happened. If not, we have failed to make the machine. But if yes, our array will look relevantly like:


Continue making sure nobody looks at the coins. Put a robot at the beginning of the first row. Now you have a countably infinite fair lottery machine that you can use over and over. To use it, just tell the robot to scan the row it’s at, announce the position of the lone tails, and move to the beginning of the next row. Applied to the above array, you will get the sequence of results 3,6,3,….
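A finite truncation of the procedure can be sketched in code. This is only an illustrative toy: the row count, row width, and retry loop are my assumptions, whereas the real construction needs infinitely many infinite rows and a supertask.

```python
import random

def make_array(rows, width, rng):
    """Toss a (finite, truncated) array of fair coins; True = tails."""
    return [[rng.random() < 0.5 for _ in range(width)] for _ in range(rows)]

def is_lucky(array):
    """The lucky event (probability zero in the infinite case):
    every row contains exactly one tails."""
    return all(sum(row) == 1 for row in array)

def draw(array, row_index):
    """The robot scans one row and announces the 1-based position
    of its lone tails."""
    return array[row_index].index(True) + 1

rng = random.Random(0)
array = make_array(3, 4, rng)
while not is_lucky(array):  # re-tossing is feasible only because the array is tiny
    array = make_array(3, 4, rng)

results = [draw(array, i) for i in range(3)]  # one lottery run per row
```

Each call to `draw` plays the role of one run of the machine; in the genuinely infinite case every position 1, 2, 3, … would be an equally (zero-probability) likely announcement.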

Of course, it’s very unlikely that we will succeed in making the machine (the probability is zero). But we might. And once we do, we can run as many paradoxes of infinity as we like. And we might even find ourselves lucky enough to be in a universe where some natural random process has already generated such a lucky array, in which case we don’t even have to flip the coins.

Once we have the machine, we can have lots of fun with it. For instance, it seems antecedently really unlikely that the first hundred times you run the machine, the numbers you get will be in increasing order. But no matter how many numbers you've pulled from the machine, you are all but certain that the next number will be bigger than any of them.

Wednesday, August 16, 2017

Consent and euthanasia

I once gave an argument against euthanasia where the controversial center of the argument could be summarized as follows:

  1. Euthanasia would at most be permissible in cases of valid consent and great suffering.

  2. Great suffering is an external threat that removes valid consent.

  3. So, euthanasia is never permissible.

But the officer case in my recent post about promises and duress suggests that (2) may be mistaken. In that case, I am an officer captured by an enemy officer. I have knowledge that imperils the other officer’s mission. The officer lets me live, however, on the condition that I promise to stay put for 24 hours, an offer I accept. My promise to stay put seems valid, even though it was made in order to avoid great harm (namely, death). It is difficult to see exactly why my promise is valid, but I argue that the enemy officer is not threatening me in order to elicit a promise from me, but rather I am in dangerous circumstances that I can only get out of by making the promise, a promise that is nonetheless valid, much as the promise to pay a merchant for a drink is valid even if one is dying of thirst.

Now, if a doctor were to torture me in order to get me to consent to being killed by her, any death-welcoming words from me would not constitute valid consent, just as promises elicited by threats made precisely to elicit them are invalid. But euthanasia is not like that: the suffering isn’t even caused by the doctor. It doesn’t seem right to speak of the patient’s suffering as a threat in the sense of “threat” that always invalidates promises and consent.

I could, of course, be mistaken about the officer case. Maybe the promise to stay put under the circumstances really is invalid. If so, then (2) could still be true, and the argument against euthanasia stays.

But suppose I am right about the officer case, and suppose that (2) is false. Can the argument be salvaged? (Of course, even if it can’t, I still think euthanasia is wrong. It is wrong to kill the innocent, regardless of consequences or consent. But that’s a different line of thought.) Well, let me try.

Even if great suffering is not an external threat that removes valid consent, great suffering makes one less than fully responsible for actions made to escape that suffering (we shouldn’t call the person who betrayed her friends under torture a traitor). Now, how fully responsible one needs to be in order for one’s consent to be valid depends on how momentous the potential adverse consequences of the decision are. For instance, if I consent to a painkiller that has little in the way of side-effects, I don’t need to have much responsibility in order for my consent to be valid. On the other hand, suppose that the only way out of suffering would be a pill whose owner is only willing to sell it in exchange for twenty years of servitude. I doubt that one’s suffering-elicited consent to twenty years of servitude is valid. Compare how the Catholic Church grants annulments for marriages when responsibility is significantly reduced. Some of the circumstances where annulments are granted are ones where the agent would have sufficient responsibility in order to make valid promises that are less momentous than marriage vows, and this seems right. In fact, in the officer case, it seems that if the promise I made were more momentous than just staying put for 24 hours, it might not be valid. But it is hard to get more momentous a decision than a decision whether to be killed. So the amount of responsibility needed in order to make that decision is much higher than in the case of more ordinary decisions. And it is very plausible that great suffering (or fear of such) excludes that responsibility, or at the very least that it should make the doctor not have sufficient confidence that valid consent has been given.

If this is right, then we can replace (2) with:

  2′. Great suffering (or fear thereof) removes valid consent to decisions as momentous as the decision to die.

And the argument still works.

Monday, August 14, 2017

Difficult questions about promises and duress

It is widely accepted that you cannot force someone to make a valid promise. If a robber after finding that I have no valuables with me puts a gun to my head and says: “I will shoot you unless you promise to go home and bring me all of the jewelry there”, and I say “I promise”, my promise seems to be null and void.

But suppose I am a cavalry officer captured by an enemy officer. The enemy officer is in a hurry to complete a mission, and it is crucial to his military ends that I not ride straight back to my headquarters and report what I saw him doing. He does not, however, have the time to tie me up, and hence he prepares to kill me. I yell: “I give you my word of honor as an officer that I will stay in this location for 24 hours.” He trusts me and rides on his way. (The setting for this is more than a hundred years ago.)

However, if promises made under duress are invalid, then the enemy officer should not trust me. One can only trust someone to do something when in some way a good feature of the person impels them to do that thing. (I can predict that a thief will steal my money if I leave it unprotected, but I don’t trust the thief to do that.) But there is no virtue in keeping void promises, since such promises do not generate moral reasons. In fact, if the promise is void, then I might even have a moral duty to ride back and report what I have seen. One shouldn’t trust someone to do something contrary to moral duty.

Perhaps, though, there is a relevant difference between the case of an officer giving parole to another, and the case of the robber. The enemy officer is not compelling me to make the promise. It’s my own idea to make the promise. Of course, if I don’t make the promise, I will die. But that fact doesn’t make for promise-canceling duress. Say, I am dying of thirst, and the only drink available is the diet gingerale that a greedy merchant is selling and which she would never give away for free. So I say: “I promise to pay you back tomorrow as I don’t have any cash with me.” I have made the promise in order to save my life. If the merchant gives me the gingerale, the promise is surely valid, and I must pay the merchant back tomorrow.

Is the relevant difference, perhaps, that I originate the idea of the promise in the officer case, but not in the robber case? But in the merchant case, I would be no less obligated to pay the merchant back if we had a little dialogue: “Could you give me a drink, as I’m dying of thirst and I don’t have any cash?” – “Only if you promise to pay me back tomorrow.”

Likewise, in the officer case, it really shouldn’t matter who originates the idea. Imagine that it never occurred to me to make the promise, but a bystander suggests it. Surely that doesn’t affect the binding force of the promise. But suppose that the bystander makes the suggestion in a language I don’t understand, and I ask the enemy officer what the bystander says, and he says: “The bystander suggests you give your word of honor as an officer to stay put for 24 hours.” Surely it also makes no moral difference that the enemy officer acts as an interpreter, and hence is the proximate origin of the idea. Would it make a difference if there were no helpful bystander and the enemy officer said of his own accord: “In these circumstances, officers often make promises on their honor to stay put”? I don’t think so.

I think that there is still a difference between the robber case and that of the enemy officer who helpfully suggests that one make the promise. But I have a really hard time pinning down the difference. Note that the enemy officer might be engaged in an unjust war, much as the robber is engaged in unjust robbery. So neither has a moral right to demand things of me.

There is a subtle difference between the robber and officer cases. The robber is threatening your life in order to get you to make the promise. The promise is something that the robber is pursuing as the means to her end, namely the obtaining of jewelry. My being killed will not achieve the robber’s purpose at all. If the robber knew that I wouldn’t make the promise, she wouldn’t kill me, at least as far as the ends involved in the promise (namely, the obtaining of my valuables) go. But the enemy officer’s end, namely the safety of his mission, would be even more effectively achieved by killing me. The enemy officer’s suggestion that I make my promise is a mercy. The robber’s suggestion that I make my promise isn’t a mercy.

Does this matter? Maybe it does, and for at least three reasons. First, the robber is threatening my life primarily in order to force a promise. The enemy officer isn’t threatening my life primarily in order to force a promise: the threat would be there even if I were unable to make promises (or were untrustworthy, etc.). So there is a sense in which the robber is more fully forcing a promise out of me.

Second, it is good for human beings to have a practice of giving and keeping promises in the officer types of circumstances, since such a practice saves lives. But it is bad to have a practice of giving and keeping promises in the robber types of circumstances, since such a practice only encourages robbers to force promises out of people. Perhaps the fact that one kind of practice is beneficial and the other is harmful is evidence that the one kind of practice is normative for human beings and the other is not. (This will likely be the case given natural law, divine command, rule-utilitarianism, and maybe some other moral theories.)

Third, the case of the officer is much more like the case of the merchant. In both cases there is a circumstance that threatens my life independently of any considerations of promises—dehydration and an enemy officer whom I’ve seen on his secret mission. In both cases, it turns out that the making of a promise can get me out of these circumstances, but the circumstances weren’t engineered in order to get me to make the promise. But the case of the robber is very different from that of the merchant. (Interesting test case: the merchant drained the oases in the desert so as to sell drinks to dehydrated travelers. This seems to me to be rather closer to the robber case, but I am not completely sure.)

Maybe, though, I’m wrong about the robber case. I have to say that I am uncomfortable with voidly promising the robber that I will get the valuables when I don’t expect to do so—there seems to be a lie involved, and lying is wrong even to save one’s life. Or at least a kind of dishonesty. But this suggests that if I were planning on bringing the valuables, I would be acting more honestly in saying it. And that makes the situation resemble a valid promise. Maybe not, though. Maybe it’s wrong to say “I will bring the valuables” when one isn’t planning on doing so, but once one says it, one has no obligation to bring them. I don’t know. (This is related to this sort of case. Suppose I don’t expect that there will be any yellow car parked on your street tonight, but I assert dishonestly in the morning that there will be a yellow car parked on your street in the evening. In the early afternoon, I am filled with contrition for my dishonesty to you. Normally, I should try to undo the effect of dishonesty by coming clean to the person I was dishonest to. But suppose I cannot get in touch with you. However, what I can do is go to the car rental place, rent a yellow car and park it on your street. Do I have any moral reason to do so? I don’t know. Not in general, I think. But if you were depending on the presence of the yellow car—maybe you made a large bet about it with a neighbor—then maybe I should do it.)

Computer languages

It is valuable, especially for philosophers, to learn languages in order to learn to see things from a different point of view, to think differently.

This is usually promoted with respect to natural languages. But the goal of learning to think differently is also furthered by learning logical languages and computer languages. In regard to computer languages, it seems that what is particularly valuable is learning languages representing opposed paradigms: low-level vs. high-level, imperative vs. functional, procedural vs. object-oriented, data-code-separating vs. not, etc. These make for differences in how one sees things that are if anything greater than the differences in how one sees things across natural human languages.

To be honest, though, I’ve only ever tried to learn one language expressly for the above purpose, and I didn’t persevere: it was Haskell, which I wanted to learn as an example of functional programming. I ended up, however, learning OpenSCAD, which is a special-purpose functional language for describing 3D solids, though I didn’t do that to change how I think, but simply to make stuff my 3D printer can print. Still, I guess I learned a bit about functional programming.

My next computer language task will probably be to learn a bit of Verilog and/or VHDL, which should be fun. I don’t know whether it will lead to thinking differently, but it might, in that thinking of an algorithm as something that is implemented in often concurrent digital logic rather than in a series of sequential instructions might lead to a shift in how I think at least about algorithms. I’ve ordered a cheap Cyclone II FPGA from AliExpress ($17 including the USB Blaster for programming it) to use with the code, which should make the fun even greater.

All that said, I don’t know that I can identify any specific philosophical insights I had as a result of knowing computer languages. Maybe it’s a subtler shift in how I think. Or maybe the goal of thinking philosophically differently just isn’t furthered in these ways. But it’s fun to learn computer languages anyway.

Thursday, August 10, 2017

Uncountable independent trials

Suppose that I am throwing a perfectly sharp dart uniformly randomly at a continuous target. The chance that I will hit the center is zero.

What if I throw an infinite number of independent darts at the target? Do I improve my chances of hitting the center at least once?

Things depend on what size of infinity of darts I throw. Suppose I throw a countable infinity of darts. Then I don’t improve my chances: classical probability says that the union of countably many zero-probability events has zero probability.
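Spelled out, this is just countable subadditivity applied to the null events A_n, where A_n is the event that the nth dart hits the center:

```latex
P\left(\bigcup_{n=1}^{\infty} A_n\right)
\le \sum_{n=1}^{\infty} P(A_n)
= \sum_{n=1}^{\infty} 0
= 0.
```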

What if I throw an uncountable infinity of darts? The answer is that the usual way of modeling independent events does not assign any meaningful probabilities to whether I hit the center at least once. Indeed, the event that I hit the center at least once is “saturated nonmeasurable”, i.e., it is nonmeasurable and every measurable subset of it has probability zero and every measurable superset of it has probability one.

Proposition: Assume the Axiom of Choice. Let P be any probability measure on a set Ω and let N be any non-empty event with P(N)=0. Let I be any uncountable index set. Let H be the subset of the product space Ω^I consisting of those sequences ω that hit N, i.e., ones such that for some i we have ω(i)∈N. Then H is saturated nonmeasurable with respect to the I-fold product measure P^I (and hence with respect to its completion).

One conclusion to draw is that the event H of hitting the center at least once in our uncountable number of throws in fact has a weird “nonmeasurable chance” of happening, one perhaps that can be expressed as the interval [0, 1]. But I think there is a different philosophical conclusion to be drawn: the usual “product measure” model of independent trials does not capture the phenomenon it is meant to capture in the case of an uncountable number of trials. The model needs to be enriched with further information that will then give us a genuine chance for H. Saturated nonmeasurability is a way of capturing the fact that the product measure can be extended to a measure that assigns any numerical probability between 0 and 1 (inclusive) one wishes. And one requires further data about the system in order to assign that numerical probability.

Let me illustrate this as follows. Consider the original single-case dart throwing system. Normally one describes the outcome of the system’s trials by the position z of the tip of the dart, so that the sample space Ω equals the set of possible positions. But we can also take a richer sample space Ω* which includes all the possible tip positions plus one more outcome, α, the event of the whole system ceasing to exist, in violation of the conservation of mass-energy. Of course, to be physically correct, we assign chance zero to outcome α.

Now, let O be the center of the target. Here are two intuitions:

  1. If the number of trials has a cardinality much greater than that of the continuum, it is very likely that O will result on some trial.

  2. No matter how many trials—even a large infinity—have been performed, α will not occur.

But the original single-case system based on the sample space Ω* does not distinguish O and α probabilistically in any way. Let ψ be a bijection of Ω* to itself that swaps O and α but keeps everything else fixed. Then P(ψ[A]) = P(A) for any measurable subset A of Ω* (this follows from the fact that the probability of O is equal to the probability of α, both being zero), and so with respect to the standard probability measure on Ω*, there is no probabilistic difference between O and α.

If I am right about (1) and (2), then what happens in a sufficiently large number of trials is not captured by the classical chances in the single-case situation. That classical probabilities do not capture all the information about chances is something we should already have known from cases involving conditional probabilities. For instance P({O}|{O, α}) = 1 and P({α}|{O, α}) = 0, even though O and α are on par.

One standard solution to the conditional probability case is infinitesimals. Perhaps P({O}) is an infinitesimal ι but P({α}) is exactly zero. In that case, we may indeed be able to make sense of (1) and (2). But infinitesimals are not a good model on other grounds. (See Section 3 here.)

Thinking about the difficulties with infinitesimals, I get this intuition: we want probabilistic information about the single-case event at a higher resolution than classical real-valued probabilities give, but at a lower resolution than infinitesimals give. Here is a possibility. Each subset of the outcome space that has probability zero also gets attached to it a monotone-increasing function from cardinalities to the set [0, 1]. If N is such a subset, and f_N is the function attached to it, then f_N(κ) tells us the probability that κ independent trials will yield at least one outcome in N.

We can then argue that f_N(κ) is always 0 or 1 for infinite κ. Here is why. Suppose f_N(κ)>0. Then κ must be infinite, since if κ is finite then f_N(κ)=1 − (1 − P(N))^κ = 0 as P(N)=0. Now, the probability of missing N on all of κ + κ independent trials is the product of the probabilities of missing it on each block of κ trials, so 1 − f_N(κ + κ)=(1 − f_N(κ))². But κ + κ = κ (assuming the Axiom of Choice), so 1 − f_N(κ)=(1 − f_N(κ))², which implies that f_N(κ) is zero or one. We can come up with other constraints on f_N. For instance, if C is the union of A and B, then f_C(κ) is the greater of f_A(κ) and f_B(κ).

Such an approach could help get a solution to a different problem, the problem of characterizing deterministic causation. To a first approximation, the solution would go as follows. Start with the inadequate story that deterministic causation is chancy causation with chance 1. (This is inadequate, because in the original dart-throwing case, the chance of missing the center is 1, but throwing the dart does not deterministically cause one to hit a point other than the center.) Then say that deterministic causation is chancy causation such that the failure event F is such that f_F(κ)=0 for every cardinal κ.

But maybe instead of all this, one could just deny that there are meaningful chances to be assigned to events like the event of uncountably many trials missing or hitting the center of the target.

Sketch of proof of Proposition: The product space Ω^I is the space of all functions ω from I to Ω, with the product measure P^I generated by the measures of cylinder sets. The cylinder sets are product sets of the form A = ∏_{i∈I} A_i such that there is a finite J ⊆ I with A_i = Ω for i ∉ J, and the product measure of A is defined to be ∏_{i∈J} P(A_i).

First I will show that there is an extension Q of P^I such that Q(H)=0 (an extension of a measure is a measure on a larger σ-algebra that agrees with the original measure on the smaller σ-algebra). Any P^I-measurable subset of H will then have Q-measure zero, and hence will have P^I-measure zero since Q extends P^I.

Let Q_1 be the restriction of P to Ω − N (this is still normalized to 1 as N is a null set). Let Q_1^I be the product measure on (Ω − N)^I. Let Q be the measure on Ω^I defined by Q(A)=Q_1^I(A ∩ (Ω − N)^I). Consider a cylinder set A = ∏_{i∈I} A_i where there is a finite J ⊆ I such that A_i = Ω whenever i ∉ J. Then
Q(A)=∏_{i∈J} Q_1(A_i − N)=∏_{i∈J} P(A_i − N)=∏_{i∈J} P(A_i)=P^I(A).
Since P^I and Q agree on cylinder sets, by the definition of the product measure, Q is an extension of P^I.

To show that H is saturated nonmeasurable, we now only need to show that any P^I-measurable set in the complement of H must have probability zero. Let A be any P^I-measurable set in the complement of H. Then A is of the form {ω ∈ Ω^I : F(ω)}, where F(ω) is a condition involving only coordinates of ω numbered by a fixed countable set of indices from I (i.e., there is a countable subset J of I and a subset B of Ω^J such that F(ω) if and only if ω|J is a member of B, where ω|J is the restriction of ω to J). But no such condition can exclude the possibility that a coordinate of ω outside that countable set lies in N, unless the condition is entirely unsatisfiable, and hence no such set A lies in the complement of H, unless the set is empty. And that’s all we need to show.

Tuesday, August 8, 2017

Naturalists about mind should be Aristotelians

  1. If non-Aristotelian naturalism about mind is true, a causal theory of reference is true.

  2. If non-Aristotelian naturalism about mind is true, then normative states of affairs do not cause any natural events.

  3. If naturalism about mind is true, our thoughts are natural events.

  4. If a causal theory of reference is true and normative states of affairs do not cause any thoughts, then we do not have any thoughts about normative states of affairs.

  5. So, if non-Aristotelian naturalism about mind is true, then we do not have any thoughts about normative states of affairs. (1-4)

  6. I think that I should avoid false belief.

  7. That I should avoid false belief is a normative state of affairs.

  8. So, I have a thought about a normative state of affairs. (6-7)

  9. So, non-Aristotelian naturalism about mind is not true. (5 and 8)

Note that the Aristotelian naturalist will deny (2), for she thinks that normative states of affairs cause natural events through final (and, less obviously, formal) causation, which is a species of causation.

I think the non-Aristotelian naturalist’s best bet is probably to deny (2) as well, on the grounds that normative properties are identical with natural properties. But there are now two possibilities. Either normative properties are identical with natural properties that are also “natural” in the sense of David Lewis—i.e., fundamental or “structural”—or not. A view on which normative properties are identical with fundamental or “structural” natural properties is not plausible outside of Aristotelian naturalism. But if the normative properties are identical with non-fundamental natural properties, then too much debate in ethics and epistemology threatens to become merely verbal in the Ted Sider sense: “Am I using ‘justified’ or ‘right’ for this non-structural natural property or that one?”


In conversation last week, I said to my father that my laptop battery has a “finite number of charge cycles”.

Now, if someone said to me that a battery had fewer than a billion charge cycles, I’d take the speaker to be implicating that it has quite a lot of them, probably between half a billion and a billion. And even besides that implicature, if all my information were that the battery has fewer than a billion charge cycles, then it would seem natural to take a uniform distribution from 0 to 999,999,999 and think that it is extremely likely that it has at least a million charge cycles.
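The arithmetic behind that last claim is a one-liner, under the stated uniform-distribution assumption:

```python
# Uniform prior on {0, 1, ..., 10**9 - 1} possible charge-cycle counts.
total = 10**9
# Probability of at least a million charge cycles:
p = (total - 10**6) / total
print(p)  # -> 0.999
```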

One might think something similar would be the case with saying that the battery has a finite number of charge cycles. After all, that statement is logically equivalent to the statement that it has fewer than ℵ0 charge cycles, which by analogy should implicate that it has quite a lot of them, or at least give rise to a uniform distribution between 0, inclusive, and ℵ0, exclusive. But no! To say that it has a finite number of charge cycles seems to implicate something quite different: it implicates that the number is sufficiently limited that running into the limit is a serious possibility.

Actually, this may go beyond implicature. Perhaps outside of specialized domains like mathematics and philosophy, “finite” typically means something like not practically infinite, where “practically infinite” means beyond all practical limitations (e.g., the amount of energy in the sun is practically infinite). Thus, the finite is what has practical limits. (But see also this aberrant usage.)

Thursday, August 3, 2017

Connected and scattered objects

Intuitively, some physical objects, like a typical organism, are connected, while other physical objects, like a typical chess set spilled on a table, are disconnected or scattered.

What does it mean for an object O that occupies some region R of space to be connected? There is a standard topological definition of a region R being connected (there are no open sets U and V, with U ∩ R and V ∩ R disjoint and non-empty, such that R ⊆ U ∪ V), and so we could say that O is connected if and only if the region R occupied by it is connected.

But this definition doesn’t work well if space is discrete. The most natural topology on a discrete space would make every region containing two or more points be disconnected. But it seems that even if space were discrete, it would make sense to talk of a typical organism as connected.

If the space is a regular rectangular grid, then we can try to give a non-topological definition of connectedness: a region is connected provided that any two points in it can be joined by a sequence of points such that any two successive points are neighbors. But then we need to make a decision as to what points count as neighbors. For instance, while it seems obvious that (0,0,0) and (0,0,1) are neighbors (assuming the points have integer Cartesian coordinates), it is less clear whether diagonal pairs like (0,0,0) and (1,1,1) are neighbors. But we’re doing metaphysics, not mathematics. We shouldn’t just stipulate the neighbor relation. So there has to be some objective fact about the space that decides which pairs are neighbors. And things just get more complicated if the space is not a regular rectangular grid.

Perhaps we should suppose that a physical discrete space would have to come along with a physical “neighbor” structure, which would specify which (unordered, let’s suppose for now) pairs of points are neighbors. Mathematically speaking, this would turn the space into a graph: a mathematical object with vertices (points) and edges (the neighbor-pairs). So perhaps there could be at least two kinds of regular rectangular grid spaces, one in which an object that occupies precisely (0,0,0) and (1,1,1) is connected and another in which such an object is scattered.
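The graph-theoretic definition is easy to make precise once a neighbor relation is fixed. Here is a minimal sketch; the two adjacency relations (faces-only vs. diagonals-included) are my illustrative choices, standing in for the two kinds of grid spaces, not anything the metaphysics settles:

```python
from collections import deque

def face_adjacent(p, q):
    """Neighbors iff the points differ by 1 in exactly one coordinate."""
    diffs = sorted(abs(a - b) for a, b in zip(p, q))
    return diffs == [0, 0, 1]

def king_adjacent(p, q):
    """Neighbors iff p != q and every coordinate differs by at most 1,
    so diagonal pairs like (0,0,0) and (1,1,1) count."""
    return p != q and all(abs(a - b) <= 1 for a, b in zip(p, q))

def is_connected(region, adjacent):
    """A region is connected iff any two of its points are joined by a
    chain of successive neighbors (checked by breadth-first search)."""
    region = set(region)
    if not region:
        return True
    start = next(iter(region))
    seen = {start}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        for q in region:
            if q not in seen and adjacent(p, q):
                seen.add(q)
                queue.append(q)
    return seen == region

diagonal_pair = [(0, 0, 0), (1, 1, 1)]
print(is_connected(diagonal_pair, face_adjacent))  # -> False: scattered
print(is_connected(diagonal_pair, king_adjacent))  # -> True: connected
```

The same two-point region comes out scattered on the first neighbor structure and connected on the second, which is just the point about there being at least two kinds of regular rectangular grid spaces.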

But we can’t use this graph-theoretic solution in continuous spaces. For here is something very intuitive about Euclidean space: if there is a third point c on the line segment between the two points a and b, then a and b are not neighbors, because c is a better candidate for being a’s neighbor than b. But in Euclidean space, there is always such a third point, so no two points are neighbors. Fortunately, in Euclidean space we can use the topological notion.

But now we have a bit of a puzzle. We have a topological notion of a physical object being connected for objects in a continuous space and a graph-theoretic notion for objects in a discrete space. Neither notion reduces to the other. In fact, we can apply the topological one to objects in a discrete space, and conclude that all objects that occupy more than one point are scattered, and the graph-theoretic one to objects in Euclidean space, and also conclude that all objects that occupy more than one point are scattered.

Maybe we should have a disjunctive notion: an object is connected if and only if it is graph-theoretically connected in a space with a neighbor-relation or topologically connected in a space with a topological structure.

That’s not too bad, but it makes the notion of the connectedness of a physical object be a rather unnatural and gerrymandered notion. Maybe that’s how it has to be.

Or maybe only one of the two kinds of spaces is actually a possible physical space. Perhaps physical space must have a topological structure. Or maybe it must have a graph-theoretic structure.

Here’s a different suggestion. Given a region of space R, we can define a binary relation c_R where c_R(a, b) if and only if the laws of nature allow for a causal influence to propagate from a to b without leaving R. Then say that a region of space R is connected provided that any two distinct points can be joined by a sequence of points such that successive points are c_R-related in one order or the other (i.e., if d_i and d_{i+1} are successive points then c_R(d_i, d_{i+1}) or c_R(d_{i+1}, d_i)).

On this story, if we have a universe with pervasive immediate action at a distance, like in the case of Newtonian gravity, all physical objects end up connected. If we have a discrete universe with a neighbor structure and causal influences can propagate between neighbors and only between them, we recover the graph-theoretic notion.
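In the discrete case, this causal definition is just graph connectivity over the symmetric closure of c_R. Here is a minimal sketch (the function and predicate names are mine, and the c_R relation is assumed to be given as a black-box predicate):

```python
from collections import deque

def is_connected(points, c_r):
    """Check the causal/graph-theoretic connectedness of a region.

    points: the points of the region R.
    c_r: c_r(a, b) is True when a causal influence can propagate
         from a to b without leaving R.

    Two points count as linked when c_r holds in either order (the
    symmetric closure of c_r); the region is connected when the
    resulting undirected graph is connected, which we check by
    breadth-first search from an arbitrary starting point.
    """
    points = list(points)
    if len(points) <= 1:
        return True
    seen = {points[0]}
    queue = deque([points[0]])
    while queue:
        a = queue.popleft()
        for b in points:
            if b not in seen and (c_r(a, b) or c_r(b, a)):
                seen.add(b)
                queue.append(b)
    return len(seen) == len(points)

# A toy discrete space: the integers 0..4, where causal influence can
# propagate only between immediate neighbors.
neighbors = lambda a, b: abs(a - b) == 1
print(is_connected(range(5), neighbors))      # True
print(is_connected([0, 1, 3, 4], neighbors))  # False: the gap at 2 breaks the chain
```

With a neighbor-only c_r this recovers the graph-theoretic notion; with a pervasive c_r (as under Newtonian gravity) every region comes out connected.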

Wednesday, August 2, 2017

Disconnected bodies and lives

We can imagine what it is like for a living thing to have a spatially disconnected body. First, if we are made of point particles, we all are spatially disconnected. Second, when a gecko is attacked, it can shed a tail. That tail then continues wiggling for a while in order to distract the pursuer. A good case can be made that the gecko’s shed tail remains a part of the gecko’s body while it is wiggling. After all, it continues to be biologically active in support of the gecko’s survival. Third, there is the metaphysical theory on which sperm remains a part of the male even after it is emitted.

But even if all these theories are wrong, we should have very little difficulty in understanding what it would mean for a living thing to have a spatially disconnected body.

What about a living thing having a temporally disconnected life? Again, I think it is not so difficult. It could be the case that when an insect is frozen, it ceases to live (or exist), but then comes back to life when defrosted. And even if that’s not the case, we understand what it would mean for this to be the case.

But so far this has concerned external space and external time. What about internally spatially disconnected bodies and internally temporally disconnected lives? The gecko’s tail and sperm examples work just as well for internal as for external space. So there is no conceptual difficulty about a living thing having a disconnected body in its inner space.

But it is much more difficult to imagine how an organism could have an internal-time disconnect in its life. Suppose the organism ceases to exist and then comes back into existence. It seems that its internal time is uninterrupted by the external-time interval of non-existence. An external-time interval of non-existence seems to be simply a case of forward time-travel, and time-travel does not induce disconnects in internal time. Granted, the organism may have some different properties when it comes back into existence—for instance, its neural system might be damaged. But that’s just a matter of an instantaneous change in the neural system rather than of a disconnect in internal time. (Note that internal time is different from subjective time. When we go under general anesthesia, internal time keeps on flowing, but subjective time pauses. Plants have internal time but don’t have subjective time.)

This suggests an interesting apparent difference between internal time and internal space: spatial discontinuities are possible but temporal ones are not.

This way of formulating the difference is misleading, however, if some version of four-dimensionalism is correct. The gecko’s tail in my story is four-dimensional. This four-dimensional thing is connected to the four-dimensional thing that is the rest of the gecko’s body. There is no disconnection in the gecko from a four-dimensional perspective. (The point particle case is more complicated. Topologically, the internal space will be disconnected, but I think that’s not the relevant notion of disconnection.)

This suggests an interesting pair of hypotheses:

  • If three-dimensionalism is true, there is a disanalogy between internal time and internal space with respect to living things at least, in that internal spatial disconnection of a living thing is possible but internal temporal disconnection of a living thing is not possible.

  • If four-dimensionalism is true, then living things are always internally spatiotemporally connected.

But maybe these are just contingent truths. Terry Pratchett has a character who is a witch with two spatially disconnected bodies. As far as the book says, she’s always been that way. And that seems possible to me. So maybe the four-dimensional hypothesis is only contingently true.

And maybe God could make a being that lives two lives, each in a different century, with no internal temporal connection between them. If so, then the three-dimensional hypothesis is also only contingently true.

I am not going anywhere with this. Just thinking about the options. And not sure what to think.

Monday, July 31, 2017

Self-consciousness and AI

Some people think that self-consciousness is a big deal, that it’s the sort of thing that might be hard for an artificial intelligence system to achieve.

I think consciousness and intentionality are a big deal, that they are the sort of thing that would be hard or impossible for an artificial intelligence system to achieve. But I wonder whether, if we could have consciousness and intentionality in an artificial intelligence system, self-consciousness would be much of an additional difficulty. Argument:

  1. If a computer can have consciousness and intentionality, a computer can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature here is 300K”.

  2. If a computer can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature here is 300K”, then it can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature of me is 300K”.

  3. Necessarily, anything that can have a conscious awareness whose object would be aptly expressible with the phrase “that the temperature of me is 300K” is self-conscious.

  4. So, if a computer can have consciousness and intentionality, a computer can have self-consciousness.

Premise 1 is very plausible: after all, the most plausible story about what a conscious computer would be aware of is immediate environmental data through its sensors. Premise 2 is, I think, also plausible for two reasons. First, it’s hard to see why awareness whose object is expressible in terms of “here” would be harder than awareness whose object is expressible in terms of “I”. That’s a bit weak. But, second, it is plausible that the relevant sense of “here” reduces to “I”: “the place I am”. And if I have the awareness that the temperature in the place I am is 300K, barring some specific blockage, I have the cognitive skills to be aware that my temperature is 300K (though I may need a different kind of temperature sensor).

Premise 3 is, I think, the rub. My acceptance of premise 3 may simply be due to my puzzlement as to what self-consciousness is beyond an awareness of oneself as having certain properties. Here’s a possibility, though. Maybe self-consciousness is awareness of one’s soul. And we can now argue:

  5. A computer can only have a conscious awareness of what physical sensors deliver.

  6. Even if a computer has a soul, no physical sensor delivers awareness of any soul.

  7. So, no computer can have a conscious awareness of its soul.

But I think (5) may be false. Conscious entities are sometimes aware of things by means of sensations of mere correlates of the thing they sense. For instance, a conscious computer can be aware of the time by means of a sensation of a mere correlate—data from its inner clock.

Perhaps, though, self-consciousness is not so much awareness of one’s soul, as a grasp of the correct metaphysics of the self, a knowledge that one has a soul, etc. If so, then materialists don’t have self-consciousness, which is absurd.

All in all, I don’t see self-consciousness as much of an additional problem for strong artificial intelligence. But of course I do think that consciousness and intentionality are big problems.

Monday, July 24, 2017

Death, harm and time

For the sake of this post, stipulate death to be permanent cessation of existence. Epicurus famously argues that death is not a harm to one, because the living aren’t harmed by death while the dead do not exist.

As formulated, the argument appears to require presentism—the view that only presently existing things exist. If eternalism or growing block is true, the dead would exist, albeit pastly. This would give us a nice little argument against presentism:

  1. If presentism is true, the Epicurean argument is sound. (Premise)

  2. The conclusion of the Epicurean argument—namely, that death is not a harm—is absurd. (Premise)

  3. So, presentism is false.

But things aren’t quite so simple, because one can reconstruct an Epicurean argument without presentism.

  4. One is intrinsically harmed by x iff there is a time t at which one is intrinsically harmed by x. (Premise)

  5. One is intrinsically harmed at t by x only if one exists at t. (Premise)

  6. One is not intrinsically harmed by death at any time at which one exists. (Premise)

  7. One is not intrinsically harmed by death at any time. (5 and 6)

  8. One is not intrinsically harmed by death. (4 and 7)

This argument distinguishes intrinsic from extrinsic harm. Here’s an illustration of the distinction I have in mind: if I lose a finger, that’s an intrinsic harm; if people say bad things about me behind my back, that’s an extrinsic harm—unless it causally impacts me in some negative way. Epicurus didn’t seem to think there was such a thing as extrinsic harm, so he formulated his argument in terms of harm as such. But, really, his argument was only plausible with respect to intrinsic harm, in that a no longer existent person certainly could suffer extrinsic harms, say by losing reputation or having loved ones suffer harm. And the conclusion that death is not an intrinsic harm is implausible enough. Death seems to be among the worst of the intrinsic harms. (In particular, I think my little argument against presentism remains a good one even if we weaken the conclusion of the Epicurean argument to say that death is not an intrinsic harm.)

Of course, the conclusion (8) is still false! So which premise is false?

Here is a pretty convincing argument for (5):

  9. One is intrinsically harmed at t by x only if one has or lacks an intrinsic property at t because of x. (Premise)

  10. One does not have or lack any intrinsic properties at times when one doesn’t exist. (Premise)

  11. So, (5) is true.

Premise (6) is also pretty plausible.

Premise (4) is also plausible.

But there is a way out of the argument. If four-dimensionalism is true, we have a good way to reject (4). Consider first the spatial analogue of (4):

  12. One is intrinsically harmed by x if and only if there is a point z in space at which one is intrinsically harmed by x.

But (12) is implausible. Consider a spherical plant that suffers the harm of being made cylindrical. To be distorted into an unnatural shape seems to be an intrinsic harm. But it need not be an intrinsic harm locatable at any point in space. At any point in space where the plant is not, surely it’s not harmed. At points where the plant is, it might be harmed—say, by the stresses induced by the unnatural shape—but it need not be. We could, in fact, suppose that the plant is nowhere stressed, etc. The harm is simply the intrinsic harm of being deformed. For another example, suppose materialism is true, and consider an animal in pain. The pain is an intrinsic harm, plausibly, but there is no harm at any single point of the brain—only at a larger chunk of the brain.

What the examples show is that spatially extended objects can be intrinsically harmed in respect of properties that cannot be localized to a single point. If four-dimensionalism is true, we are also temporally extended. We should then expect the possibility of being intrinsically harmed in respect of properties that cannot be localized to a single instant of time, and hence we should not believe (4). And death seems to be precisely such a case: one is harmed by having only a finite extent in the temporally forward direction. This could be just as much an intrinsic harm as being spatially distorted.

In fact, once we see the analogy between harm not located at a point of space and harm not located at a point of time, it is easy to find other counterexamples to (4). Consider a life of unremitting boredom. Suppose someone lives from t1 to t2 and is bored at every time. At every time t between t1 and t2 she suffers the intrinsic harm of being bored; but she has the additional temporally non-punctual intrinsic harm of being always bored. Or suppose that materialism is true. Then just as pains do not happen in respect of properties at a single spatial point, they probably do not happen in respect of properties at a single instant either: pain likely requires a sequence of neural events.

In fact, the multiplication of examples is sufficiently easy that even apart from the more abstruse question of the harms of death, someone whose theory of time or persistence forces her to endorse (4) is in trouble.

But on reflection, the moves against three-dimensionalism and maybe even presentism were too quick. Maybe even the presentist can say that we have intrinsic properties which hold in virtue of how we are over a temporally extended period of time.

Thursday, July 20, 2017

Life in the interim state and the nature of time

Assume this thesis:

  1. We go out of existence at death and return to existence at the resurrection.

Suppose, further, that:

  2. There is a last moment t1 of earthly life and a first moment t2 of resurrected life.

Now argue:

  3. If there are no intervening moments of time between t1 and t2, one is never dead.

  4. Whether there are any intervening moments of time between t1 and t2 depends on what happens to things other than one.

  5. So, whether one is ever dead depends on what happens to things other than one.

  6. So, whether one is ever dead is extrinsic to one.

But that’s absurd in itself, plus it implies the absurdity that death is only an extrinsic harm. So, we should reject 1. We exist between death and the resurrection.

There are two controversial assumptions in the argument: 2 and 4. Assumption 4 follows from an Aristotelian picture of time as consisting in the changes of things. Since one doesn’t exist between t1 and t2, those changes would have to be happening to things other than oneself. If one doesn’t accept the Aristotelian picture of time, it’s much harder to argue for 4.

Assumption 2 is obviously true if time is discrete. If time is continuous, it might or might not be true. For instance, it could be that one lives from time 0 to time 100, both inclusive, in which case t1 = 100, but it could also be that one lives from time 0 to time 100, non-inclusive, in which case t1 doesn’t exist. Similarly, one could be resurrected from time 3000, inclusive, to time infinity, non-inclusive, in which case t2 = 3000, but it could also be that one is resurrected from time 3000, non-inclusive, in which case t2 doesn’t exist.

However, even in the continuous case the argument has some force. For, first of all, it’s obvious that death is an intrinsic harm to us, and that obviousness does not depend on obscure details about whether the intervals of one’s life include their endpoints. Second, it is at least metaphysically possible for 2 to hold. But then in a world where 2 were to hold, our death would be merely an extrinsic harm to us, which would still be absurd.

AI and ontology

  1. Only things that exist think.

  2. Only simples and living things exist. (Cf. van Inwagen and Aristotle.)

  3. Computers are neither simple nor alive.

  4. So, computers don’t think.

Monday, July 17, 2017

Computer consciousness and dualism

Would building and running a sufficiently “smart” computer produce consciousness?

Suppose that one is impressed by the arguments for dualism, whether of the hylomorphic or Cartesian variety. Then one will think that a mere computer couldn’t be conscious. But that doesn’t settle the consciousness question. For, perhaps, if one built and ran a sufficiently “smart” computer (i.e., one with sufficient information processing capacity for consciousness), a soul would come into being. It wouldn’t be a mere computer any more.

Basically the thought here supposes that something like the following is a law of nature or a non-coincidental regularity in divine soul-creation practice:

  1. When matter comes to be arranged in a way that could engage in the kind of information processing that is involved in consciousness, a soul comes into existence.

Interestingly, though, a contemporary hylomorphist has very good reason to deny (1). The contemporary hylomorphist thinks that the soul of an animal comes into existence at the beginning of the animal’s existence as an animal. Now consider a higher animal, say Rover. When Rover comes into existence as an animal out of a sperm and an egg, its matter is not arranged in a way capable of supporting the kind of information processing involved in consciousness. Yet that is when it acquires its soul. When finally the embryo grows a brain capable of this kind of information processing, no second soul comes into existence, and hence (1) is false. (I am talking here of contemporary hylomorphists; Aristotle and Aquinas both believed in delayed ensoulment, which would complicate the argument, and perhaps even undercut it.) The same argument will apply to those Cartesian dualists who are willing to admit that they were once embryos without brains.

Perhaps one could modify (1) to:

  2. When matter comes to be arranged in a way that could engage in the kind of information processing that is involved in consciousness and a soul has not already come into existence, then a soul comes into existence.

But notice now three things. First, (2) sounds ad hoc. Second, we lack inductive evidence for (2). We know of no cases where the antecedent of (2) is true. If we were to generate a computer with the right kind of information processing capabilities, we would know that the antecedent of (2) is true, but we would have no idea if the consequent is true. Third, our observations of the world so far all fit with the following generalization:

  3. Among material things, consciousness only occurs in living things.

But a “smart” computer would still not be likely to be a living thing. If it were, we would expect there to be non-“smart” computers that are alive, by analogy with how there are unconscious living things alongside conscious ones. But it is not plausible that there would be computers that are alive but not “smart” enough to be conscious. One might as well think that the laptop I am writing this on will be conscious.

This isn’t a definitive refutation of (2). God has the power to (speaking loosely) provide an appropriately complex computer with a soul that gives rise to consciousness. But inductive generalization from how the world is so far gives us little reason to think he would.

Sunday, July 16, 2017

Informed organs surviving the death of an individual

In my last post, I offered a puzzle, one way out of which was to accept the possibility of informed bits of an animal surviving the death of the animal. But the puzzle involved a contrived case: a snake that was annihilated.

But I can do the same story in a much more ordinary context. Jones is lying on his back in bed, legs stretched out, with healthy feet, and dies of some brain or heart problem. How does the form (=soul) leave his body? Well, there are many stories we can tell. But here's one thing that's clear: the form does not leave the toes before leaving the rest of the body. I.e., either the toes die (=are abandoned by the form) last or they die simultaneously with the rest. But in either case, Special Relativity and the geometry of the body (the fact that one can draw a plane such that one or more toes are on one side of the plane, and the rest of the body is on the other) imply that there is a reference frame in which the form leaves one or more of the toes last. Thus, there will be a reference frame and a time at which only toes or parts of toes are informed. It is implausible to think that one is alive if all that's left alive are the toes. So organs can survive death while informed by the individual's form.

Friday, July 14, 2017

Snake annihilation and partial death

The following five principles seem to be rationally incompatible:

  1. Every part of a living organism is informed by its form.

  2. If any part of an organism is informed by its form, the organism is alive.

  3. A snake would be dead if everything but the tailmost one percent of its length were annihilated.

  4. Simultaneity is relative, as described by Special Relativity.

  5. Being informed by a form is not relative to a reference frame.

To see the incompatibility, consider this case. A snake of ordinary proportions is lying stretched out in a line and is then instantaneously completely annihilated. Notice an interesting fact about this snake:

  6. Every bit of this snake is informed by the form of the snake whenever it exists.

This follows from (1) and the setup of the situation. Note that (6) will not be true in the case of snakes that meet a more ordinary end than by complete instant annihilation: those snakes leave behind parts that are no longer informed (they may be parts only in a manner of speaking, but I think nothing in my argument hangs on this). It is to make (6) true that I supposed the snake annihilated instantaneously.

Now, by (4), the claim that the snake is annihilated instantaneously must have been made with respect to some reference frame F1. But it follows from Special Relativity and the geometry of linear snakes that there will be a reference frame F2 relative to which the snake is annihilated gradually from the head to the tail rather than simultaneously. There will thus be a time t2 such that relative to F2 at t2 the snake has been annihilated except for the tailmost one percent. At t2 relative to F2, that tailmost one percent is informed by the form of the snake, by (5) and (6). By (2), the snake is alive at t2 relative to F2. But by (3), it is dead at t2 relative to F2. So, the snake is both alive and dead at t2 relative to F2, which is absurd.
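The frame-relativity appealed to here is just the standard Lorentz transformation, and it is easy to check numerically. A quick sketch with purely illustrative values (a one-meter snake and a frame moving at half light speed, neither of which is essential to the argument):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def t_prime(t, x, v):
    """Time coordinate, in a frame F2 moving at velocity v along x,
    of an event with coordinates (t, x) in the rest frame F1."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C ** 2)

# In F1, every point of a 1 m snake (tail at x = 0, head at x = 1)
# is annihilated simultaneously at t = 0.
v = 0.5 * C  # F2 moves tail-to-head at half light speed
head = t_prime(0.0, 1.0, v)   # F2-time at which the head is annihilated
p99 = t_prime(0.0, 0.01, v)   # F2-time after which only the tailmost 1% is left
tail = t_prime(0.0, 0.0, v)   # F2-time at which the tail is annihilated

# Relative to F2 the annihilation runs head-first, and there is a whole
# interval of F2-times between p99 and tail during which only the
# tailmost one percent of the snake still exists.
assert head < p99 < tail
```

So a frame like F2, and a time t2 within that interval, always exists for a spatially extended snake.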

I am not sure what to do about this argument. I feel pushed to deny (2). Perhaps something could be dead simpliciter but still have living parts. But that’s an uncomfortable position.

Life and non-life

Assume a particle-based fundamental physics. Then the non-living things in the universe outnumber the living by many orders of magnitude. But here is a striking fact, given a restricted compositionality like van Inwagen’s, Toner’s or mine, on which all there is in the universe are particles and organisms: the number of kinds of living things outnumbers the number of kinds of non-living things by several orders of magnitude. The number of kinds of particles is of the order of 100, but there are millions of biological species (they may not all correspond to metaphysical species, of course).

Counting by individuals, living things are exceptional. But counting by kinds, non-living things are exceptional. Only a tiny portion of the universe is occupied by life. But on the other hand, only a tiny portion of the space of kinds of entities is occupied by non-life.

I am not sure what to make of these observations. Maybe it gives some credence to an Aristotelian rather than Humean way of seeing the world, by putting the kinds of features that are found in living things, such as teleology, at the center of metaphysics.

Thursday, July 13, 2017

Preponderance of evidence

I do formal epistemology, but I am no legal scholar, so this could be a complete misunderstanding. It is my understanding that in civil cases a preponderance of evidence standard is used on which the evidence needs to support the conclusion with a probability merely greater than 1/2. This seems ridiculous in cases where one is seeking compensation for damages that may or may not have occurred.

Suppose I run a business, and I treat my staff somewhat shabbily but not actionably. One day, hundreds of dollars worth of damage occurs in the server room. Review of blurry security camera footage, building security logs and other data proves beyond reasonable doubt the following facts:

  • A thin stocking was put over the camera, hence the blur.

  • There were five employees in the offices at the time, all of whom had a similar build and appearance: Alfred, Bill, Carl, David and Edgar.

  • Three of the employees went to the bathroom and returned with buckets full of water which they poured over the servers.

  • The other two employees did their best to stop the three, including calling 911 and heroically trying to block the door to the server room. As a result of the scuffle, everybody’s fingerprints are on the buckets and everybody is wet.

  • Each employee claims with equal credibility that he was one of the two trying to stop the attack. Moreover, everybody claims to be unable to identify who the “other” employee trying to stop the attack is. The video footage shows a scene of such confusion that this inability to identify is unsurprising.

So, I fire all five employees and then sue each of the five individually for damages. I argue in the case of each employee that the evidence clearly yields a 3/5 probability that he was responsible for damage, and remind the court that 3/5 > 1/2.

But surely it would be a serious miscarriage of justice for all five to be held liable for damages that two of the five sought to prevent.

I wonder if cases like this get their force solely from the fact that the probabilities involved—namely, 3/5—are low, or if there is something else going on. Suppose I had a thousand employees, and 999 were damaging company property while one was trying to stop it. Should I be able to sue all 1000, correctly claiming a probability of 999/1000 of responsibility in each case, while knowing for sure that a judgment in my favor in all 1000 cases will place a severe financial burden on exactly one innocent person?
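The arithmetic behind the worry can be made explicit with a toy sketch (the helper function is hypothetical, merely formalizing the numbers in the examples above):

```python
def liable_counts(n_total, n_guilty, threshold=0.5):
    """Apply a bare preponderance rule: each defendant is found liable
    iff the probability of guilt exceeds the threshold.
    Returns (number held liable, number of innocents held liable)."""
    p_guilt = n_guilty / n_total
    if p_guilt > threshold:
        # The same evidence applies to each defendant, so either all
        # are held liable or none are.
        return n_total, n_total - n_guilty
    return 0, 0

# The five-employee case: P(guilt) = 3/5 > 1/2, so all five are held
# liable, including the two who tried to stop the attack.
print(liable_counts(5, 3))       # (5, 2)
# The thousand-employee case: exactly one innocent person is burdened.
print(liable_counts(1000, 999))  # (1000, 1)
```

The rule guarantees that the known number of innocents (n_total − n_guilty) is held liable whenever the shared probability of guilt clears the threshold, which is what makes both the 3/5 and the 999/1000 cases uncomfortable.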

That is an uncomfortable conclusion, but perhaps we should bite the bullet and say that this is no different from a court knowing that over the run of many cases, there will be a small minority where innocents are burdened with grave burdens—and the risk of suffering such burdens is just part of the cost of membership in the society, much as being subject to the draft is.

But it seems much more uncomfortable to say something like this in the 3/5 case—or a 51/100 case—than in a 999/1000 case.

Naive intuition: The evidence needed should scale with the burden to the defendant in the case of a finding against them. Maybe the evidence requirements do thus scale in practice. Like I said, I am no legal scholar.

Love and happiness

Could perfect happiness consist of perfect love?

Here’s a line of argument that it couldn’t. Constitutively central to love are the desire for the beloved’s good and for union with the beloved. A love is no less perfect when its constitutive desires are unfulfilled. But perfect happiness surely cannot be even partly constituted by unfulfilled desires. If perfect happiness consisted of perfect love, then one could have a perfect happiness constituted at least partly by unfulfilled desires.

When this argument first occurred to me a couple of hours ago, I thought it settled the question. But it doesn’t quite. For there is a special case where a perfect love’s constitutive desires are always fulfilled, namely when the object of the love is necessarily in a perfectly good state, so that the desire for the beloved’s good is necessarily fulfilled, and when the union proper to the love is of such a sort that it exists whenever the love does. Both of these conditions might be thought to be satisfied when the object of love is God. Certainly, a desire for God’s good is always fulfilled. Moreover, although perfect love is compatible with imperfect union in the case of finite objects of love, perfect love of God may itself be a perfect union with God. If so, then our happiness could consist in perfect love for God.

I am not sure the response to the argument works but I am also not sure it doesn’t work. But at least, I think, my initial argument does establish this thesis:

  • If perfect happiness consists of perfect love, it consists of perfect love for God.

Of course none of the above poses any difficulty for someone who thinks that perfect happiness consists of fulfilled perfect love.

Tuesday, July 11, 2017

Special Relativity and physicalism

There is, I think, an underexplored argument against physicalism on the basis of Special Relativity and the unity of apperception.

The unity of apperception seems to imply that there is always a non-relative fact of the matter whether two perceptions are co-perceived: whether I am feeling cold at the same time as I am seeing a red cube, say. (Einstein’s own definition of simultaneity presupposes this: he defines the simultaneity of two distant events in terms of the co-perception of light beams from them.) When two perceptions are co-perceived, they are simultaneous. So there must be a non-relative simultaneity in the mind. But it is very unlikely that all co-perceived perceptions are grounded in exactly the same place in the brain. And simultaneity between physical events happening at different locations is always relative. So perceptions aren’t physical events.

I don’t think this is a very strong argument, though. It’s open to the physicalist to say that perceptual time is different from physical time, and perceptual simultaneity need not correspond to physical simultaneity. The best version of physicalism is functionalism. Now imagine embedding a causally isomorphic copy of Napoleon in a universe with four spatial and one temporal dimension, but in such a way that all of the four-dimensional life of Napoleon is realized within the four spatial dimensions, at a single temporal instant. The three spatial dimensions of Napoleon would be realized within three spatial dimensions, and the temporal dimension of Napoleon would be realized within the fourth spatial dimension. All the diachronic causation in the life of our world’s Napoleon becomes simultaneous causation in the new world. All of the life of the Napoleon-copy is then lived at a single instant of physical time, but it has all of the causal richness that Napoleon’s life had, and it is causally isomorphic to Napoleon. It is plausible, then, that the functionalist will say that Napoleon-copy has the same mental life as Napoleon. But Napoleon-copy’s mental life is lived all at once in physical time. So the functionalist can say that mental time is not the same as physical time—without budging from physicalism.

Now, I think some people will find this kind of a separation between physical time and mental time to be unacceptable. If so, then they shouldn’t be physicalists. I myself am not a physicalist, but I find the separation between physical and mental time quite plausible. After all, don’t we say that sometimes time runs faster than at other times?

Monday, July 10, 2017

Permissibility of the natural

The usual way to argue that an action is permissible is to argue that the arguments against the action’s permissibility fail. But it would be really nice to be able to give a more positive argument for an action’s permissibility. Sometimes one can do so by showing that the action is obligatory, but (a) that doesn’t help with the permissibility of non-obligatory actions, and (b) often an argument for the obligatoriness of a positive action presupposes the action’s permissibility (e.g., the obligation to kill a dog that is attacking one’s child when no other means of defense is available presupposes the general permissibility of killing dogs with good reason).

Here is a place where Natural Law (NL) can provide something quite useful, namely this principle:

  1. If A is a natural action, then normally A is permissible.

This principle could, for instance, be used to generate intuitively compelling positive arguments for such controversial theses as:

  2. It is normally permissible to eat animals.

  3. It is normally permissible for us to reproduce.

  4. It is normally permissible for us to prefer those more closely related to us.

In addition to Natural Lawyers, theists in general might have reason to endorse (1), on the grounds that our nature comes from God.

Of course, there is always going to be a difficulty in determining whether the antecedent of (1) is true.

Non-theistic non-NL theories are unlikely to endorse (1) except as a rule of thumb. And it will be an interesting explanatory question on those theories why then (1) is true even as a rule of thumb.

Sunday, July 9, 2017

Infima species

There is a classic controversy in interpreting Aristotle: Is there one form per individual or one form per species?

One of the main arguments for individual forms is that the form of the human being is the soul, and it would be crazy to think that you and I have the same soul.

But what if—though this is surely not what Aristotle thought—the truth were this: There is one form per species, but humans, unlike other organisms, are each their own species (much as Aquinas thought the angels were).

This creates a discontinuity between non-human and human animals. This discontinuity is in itself a disadvantage of the view—it makes things more complicated.

However, at the same time the discontinuity would correspond nicely with some ethical intuitions. It wouldn’t be reasonable for a human to sacrifice her life for a Komodo dragon. But it could be reasonable for her to sacrifice her life for the Komodo dragon species. The view also fits with the widespread, though far from universal, intuition that it is permissible to kill non-human animals for food, but that the killing of a human being is a morally far weightier thing. Moreover, the idea that humans are infima species seems to capture important things about human individuality (I am grateful to Richard Gale for this observation), including the idea that while there is a teleological commonality between human beings, individual humans also have individual vocations, telē that are theirs alone.

The main disadvantage of the view is theological. In Athanasian soteriology, it is crucial that Christ is metaphysically the same species as we are. But one might hope for a modified Christology on which being of the same genus plays the role that being of the same species plays for St Athanasius—or perhaps one on which what plays the role is just the fact of a shared rational animality (which we would also share with any non-human rational animals outside of the Solar System).

I don’t think the view is true, because the radical discontinuity the view posits between non-human and human animals just seems wrong. But I think there is more to be said for this view than is generally thought. And for those who think that they are not animals—for instance, people who think that they are constituted by an animal—the view seems even better.

Friday, July 7, 2017

Immaterial body parts

Here’s a difficult question: Does an artificial heart literally become a body part of the patient?

And here’s a line of thought suggestive of a negative answer.

  1. Necessarily, all our body parts are material.

  2. If one could have an artificial heart as a body part, one could have an immaterial artificial heart as a body part.

  3. So, one cannot have an artificial heart as a body part.

Why accept 2? Because presumably what makes an artificial heart suitable for being a body part is that it does the job of a heart. But we could imagine an immaterial being which does the job of a heart. For instance, an angel could move blood around the body, and do so in response to electrical activity in the brain stem. Perhaps one could say that an angel couldn't be a body part, because it is already an intelligent being. But we could then imagine something that moves blood around like the angel but doesn’t have a mind.

I am not so confident of premise 1, however. One could, I suppose, turn the argument around: An artificial heart could be a body part, so possibly some of our body parts are immaterial. And if that’s right, then given a view on which body parts are informed by the form of the person, we would have the further interesting conclusion that a form can inform something that isn’t matter.

Minds don't think

  1. Only things with minds think.

  2. Minds don’t have minds.

  3. So, minds don’t think.

Corollary: We think with minds, hence we are not minds.

There might be an exception to (2) in the case of God. By divine simplicity, God is his own mind. So God's mind has a mind, namely itself.

Wednesday, July 5, 2017

Against nihilism

Argument A:

  1. Necessarily, if there is nothing, it is impossible that anything exists.

  2. Something exists.

  3. So, by the Brouwer Axiom, necessarily possibly something exists.

  4. So, the consequent of (1) is impossible.

  5. So, it is impossible that there is nothing.

The most controversial premise in this argument is (1). Premise (1) follows from a picture of modality on which possibility is prior to necessity, and the possibility of non-actual things is grounded in possibilifiers. Absent possibilifiers, nothing is possible. But suppose that instead we like a picture of modality as grounded in necessitators. Then instead we have this argument.

Argument B:

  6. Necessarily, if there is nothing, no proposition is necessary.

  7. It’s necessary that it’s necessary that 2+2=4. (Obvious, or else a consequence of S4 and the fact that it’s necessary that 2+2=4.)

  8. So, the consequent of (6) is impossible.

  9. So, it is impossible that there is nothing.

And finally we have:

Argument C:

  10. Necessarily, if there is nothing, either it is impossible that anything exists or no proposition is necessary.

  11. Necessarily possibly something exists. (Premise (3))

  12. It’s necessary that it’s necessary that 2+2=4. (Premise (7))

  13. So, the consequent of (10) is impossible.

  14. So, it is impossible that there is nothing.

Everything is beautiful

Consider something visually ugly, say one of my school painting projects. The colors are poorly chosen and the lines don’t do a good job of representing what the painting is meant to represent. (I am not being modest.)

But now suppose we live in an infinite universe or a multiverse, so that every possible intelligent species is realized. It is very likely that there will be some intelligent species whose electromagnetic spectral receptivities are such that the colors in the lines look gorgeous to it, and harmonize in a wonderful abstract way with the shape of the lines. This is, of course, a chance matter—I wasn’t making the painting for that mode of visual receptivity. Let’s say that the species is the xyllians. We can still say that what I made is an ugly work of art, but it is also a part of the natural world, and considered as a part of the natural world it is visuallyx (i.e., as seen with the electromagnetic reception apparatus of xyllians) beautiful while being visuallyh (i.e., as seen with human electromagnetic reception apparatus) ugly.

Moreover, it is irrelevant whether the xyllians and humans exist. Whether they exist or not, my painting is visuallyx beautiful and visuallyh ugly. All that’s needed is that the xyllians and humans could exist. Thus, my painting really is both beautiful and ugly, even if we are the only intelligent species. And it is just as objectively beautiful as it is objectively ugly. I wasn’t supposing that the xyllians misperceive: just that they have a different pattern of spectral receptivities. We can suppose that xyllian visual perception is just as accurate in reflecting the world, including my unhappy artistic productions, as ours is.

This means that an argument from particular beauty for the existence of God must be run cautiously. Sure, sunsets and goldfish are beautiful. But so is any child’s scrawl, and quite likely any physical object is beautiful with respect to some possible sensory apparatus. Particular instances of beauty are easy to find and should not surprise us. What could surprise us, however, is:

  1. That the particular sensorily beautiful things around us—such as sunsets and goldfish—are in fact beautiful with respect to the sensory apparatus of the intelligent species that dwells near them.

We might also attempt to mount arguments from beauty to God on the basis of these remarkable facts:

  1. That there is such a property as (objective) beauty at all.

  2. That we are able to perceive beauty.

  3. That we enjoy beauty.

  4. That we are able to make correct judgments of beauty.

And bracketing the question of arguing for the existence of God on the basis of beauty, the realization that all material things are beautiful should lead us to glorify God. For while I said that it’s chance that my poor attempts at painting are visuallyx beautiful, that is so only loosely speaking. God is omnirational, and that the paintings are visuallyx beautiful is a redeeming quality that surely God did not fail to intend.

Friday, June 30, 2017

A curious bug

Here's a bug I haven't had before: my code broke because something else was improved. I had some Arduino code that needed to display text on a two-line LCD. Everything worked a couple of months ago. Today I made some changes irrelevant to the screen display code, and the screen display started omitting characters. I worried it was a hardware issue of some sort, but my best guess as to what happened is this: the Arduino IDE had been reinstalled, and the new version must have optimized something. As a result, the code ran faster than before, and the LCD couldn't keep up with it. To fix it, I had to add about five microseconds of extra delay per byte sent to the LCD.

If this weren't microcontroller code, one would use a timer rather than a hardcoded delay, and the problem wouldn't occur. But using a timer would likely be less efficient, and I wanted maximum efficiency in this part of the code. I'm not used to dealing with hardware at this low level, and so I'm not used to "too fast" being a problem (except with user interaction).
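The timer-based alternative mentioned above can be sketched in ordinary C++. (This is only an illustration: the function `lcd_send_byte` and the 40-microsecond figure are stand-ins, not my actual sketch or the real timing of my LCD.) Instead of padding every byte with a fixed delay, remember the earliest moment the LCD can accept the next byte, and wait only when the code arrives there too soon:

```cpp
#include <cassert>
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

// Illustrative stand-in for whatever actually pushes a byte to the LCD.
void lcd_send_byte(unsigned char /*b*/) { /* write to the data pins here */ }

// Minimum interval the (hypothetical) LCD needs between bytes.
constexpr auto kMinInterval = std::chrono::microseconds(40);

class PacedSender {
  Clock::time_point next_ok_ = Clock::now();
public:
  void send(unsigned char b) {
    // Wait only if we're ahead of the LCD. If the code between sends
    // is slow anyway, this costs nothing -- unlike a hardcoded delay,
    // which is paid on every byte regardless of how fast the code runs.
    std::this_thread::sleep_until(next_ok_);
    lcd_send_byte(b);
    next_ok_ = Clock::now() + kMinInterval;
  }
};
```

On the Arduino itself there is no `std::thread`, but the same pattern works with `micros()`: record the next acceptable send time in microseconds and busy-wait with an overflow-safe comparison such as `while ((long)(micros() - next_ok) < 0) {}` before each send.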

The variety of beauty

A crucial part of Diotima’s ladder is the progress from sensible beauty to the non-sensible beauty of mind, law and mathematics. From time to time I’m struck by how very strange it is that such very different things as paintings, faces, poems, minds and theorems have beauty in common.

If one has a view of beauty as that which gives a certain “aesthetic pleasure”, it’s easy to explain this: it is not that surprising that different inputs could give rise to the same kind of pleasure. But that view of beauty is false. (We would not make my preschool scribbles more beautiful than Monet’s mature paintings by brainwashing people into taking more aesthetic pleasure in the former than in the latter.)

Plato’s famous explanation is that all these different things participate in the same form. But that leaves mysterious why it is that a painting that exhibits a certain harmonious play of colors and a theorem that is illuminating and unifying in a certain way both end up necessarily participating in the form of beauty. There needs to be a connection between the configurations that give rise to beauty and the participation in the form of beauty. The historical Plato seems to have thought that there was a common mathematical structure in all these configurations, but this seems quite implausible given how greatly they vary.

Perhaps a theistic explanation can make some progress. All beauty is a participation in God. But God is infinitely beyond all else, so this participation is from an infinite distance, and it is not so surprising that the infinite richness of God can be participated in in infinitely many different ways.

The difficulty with this explanation is that beauty is not the only property that’s a participation in God. Every positive property is a participation in God. And some positive properties—say, knowledge—are much more unified than beauty. Perhaps it helps, though, to have the medieval view that beauty, goodness and being are all in some sense interchangeable. So perhaps every participation in God constitutes beauty, and so the great variety of participations in God gives rise to the great variety of types of beauty.

Wednesday, June 28, 2017

Intention and credence

In a paper on Double Effect, I offer this kind of an example. Jim has sneaked into a zoo on a mission to kill the first mammal he sees at the zoo, because a very rich eccentric has informed him that he’d give a very large sum of money to famine relief if Jim did that. Jim sees the zookeeper and kills him, reasoning that zookeepers are mammals, and hence the kill will satisfy the eccentric’s condition. In the paper, I argued that Jim need not be intending to kill a human being even if he knows the zookeeper is a human being. His intention need simply be to kill that mammal. Of course, this is still a murder, and hence I argue that the Principle of Double Effect should not be formulated in the classical way in terms of intentions.

I think a lot of people are incredulous of my claim that Jim can know that the mammal he is shooting is a human being and yet not intend to be killing a human being. It’s just occurred to me that there may be a way to help overcome that incredulity by making the story more gradual. Jim first sees a shadowy figure in the dark in the primate enclosure very far away. He assumes it’s an ape, and aims his rifle. However, he doesn’t want to miss, so he comes a couple of steps closer. As he gradually approaches, he has a very vague impression that there is something a little human-like about the movements of that primate. He thinks to himself, however, that apes are close relatives to humans, so it’s almost certainly still an ape. But as he approaches, his evidence that what is before him is a human rather than an ape increases. Finally, by the time he’s close enough to shoot, the evidence is conclusive: he knows it’s a human. But he doesn’t care a whit—the only thing that matters to this callous individual is that it’s a mammal. So he shoots and murders.

Let’s suppose that Jim’s credence that the mammal is human goes from 0.0001 to 0.9999 as he walks forward. At the 0.0001 point, it’s clearly not Jim’s intention to kill a human being. Nor at the 0.5000 point. Nor even at the 0.5001 point. Could it be that Jim’s intention becomes one to kill a human being once his credence gets high enough for him to count as believing, or maybe even knowing, that this is a human being? But it is implausible that a merely numerical increase in the credence suddenly forces a change in Jim’s intention. Intention just does not seem to be degreed in a way that lines up with the degreed nature of Jim’s credence.

So, what should we say? I think it is this: Whether Jim’s credence was 0.0001 or 0.9999 at the time of the shot, as long as he was acting callously and not caring about whether the victim was ape or human, he accomplished the death of a human being. This accomplishment (or something close to it) makes him a murderer. Of course, at the 0.0001 credence point, it would be hard to prove in a court of law that he accomplished the death: that he shot without caring whether the victim was ape or human, caring only that the victim was a mammal.

Tuesday, June 27, 2017

Set size and paradox

Some people want to be able to compare the sizes of sets in a way that respects the principle:

  1. If A is a proper subset of B, then A ≤ B but not B ≤ A.

They do this in order to escape what they think are paradoxical consequences of the Cantorian way of comparing sizes. But from one paradox they fall into another. For the following can be proved without the Axiom of Choice:

  2. If there is a transitive and reflexive relation ≤ between sets of reals (or just countable sets of reals) that satisfies (1), then the Banach-Tarski Paradox holds.

And the Banach-Tarski Paradox is arguably more paradoxical than the paradoxes of infinity that (1) is supposed to avoid.
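The standard example behind the felt paradox of the Cantorian comparison, and hence behind the pull of (1), can be stated in a line:

```latex
E = \{0, 2, 4, \dots\} \subsetneq \mathbb{N}, \qquad
f : \mathbb{N} \to E, \quad f(n) = 2n \ \text{is a bijection.}
```

On the Cantorian comparison, the bijection makes $E$ and $\mathbb{N}$ exactly the same size even though $E$ is a proper subset of $\mathbb{N}$; principle (1) instead requires $E \le \mathbb{N}$ but $\mathbb{N} \not\le E$.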

Anthropomorphism and theism

Sometimes theists are accused of anthropomorphism in their concept of God. But it is important to note that theists hold that God is the entity least like humans. Rocks are closer to us in intellectual capacity than God is. Amoebae are more like us in love than God is. Wet noodles resemble us in power more than God does. All creatures are more like one another than they are like God.

Of course, even if God is the entity least like humans, humans could be the entities most like God. But typical religious theists think even that is false: the angels are more like God than humans are.

None of this denies that there are particular (and controversial) theological views that may suffer from an undue anthropomorphism. I suspect certain motivations for taking God to be mutable to be like that.

Monday, June 26, 2017

Command ethics

I think one of the most powerful objections to divine command theory is MacIntyre’s question as to which divine attributes make it be the case that the obligatory is what God commands. It’s not God’s creating us: for imagine a naturalistic universe where a crazy scientist creates people—surely the crazy scientist’s commands do not constitute obligations. It’s not God’s being omnipotent—that just seems irrelevant. Omniscience also doesn’t seem to help. Etc.

Here’s a theory that just occurred to me which avoids this problem:

  • the obligatory is what is validly commanded by someone.

This is a command theory instead of a divine command theory. The difficulty with this theory is giving an account of a valid command that does not proceed by saying that a valid command is one that it is obligatory to obey. Perhaps, though, one could suppose that there is a fundamental property of non-derivative authority (actually, a relational property: non-derivative authority over x with respect to R) that some persons have. For instance, God has this property in a very broad and non-derivative way, but God might not be the only one (maybe parents have it with respect to children, and governments with respect to people). This theory solves the MacIntyre problem with divine command theory. And while there is a cost to having a primitive account of non-derivative authority, there is some reason to think that even if we grounded obligations in something other than commands, we might still have to take non-derivative authority to be primitive.

Of course, without God the command theory is just implausible: clearly there are ordinary obligations we have that do not come from the commands of other ordinary persons.

I certainly don’t endorse the theory. But it’s worth thinking about, and in particular it’s worth thinking whether it’s not superior to divine command theory.

Friday, June 23, 2017

The unknown mechanism of action of the IUD

A fellow philosopher just sent me this very interesting quote from an article in a reputable medical journal:
[I]f it was conclusively shown that the sole or principal mode of action [of the IUD] was to prevent the embryo from implanting, then this method, as in the case with emergency contraception, would be considered by the Roman Catholic church as causing an early abortion. As a result many agencies involved in the research, development or delivery of contraception prefer to leave the mechanism of action issue unresolved, which may explain why research into the contraceptive mechanisms of IUDs has been sparse in the last 20 years.

The quote’s invocation of politics fits with vague suspicions I had.

But in any case, I wonder whether leaving the “the mechanism of action issue unresolved” helps all that much morally. Suppose that prevention of implantation is morally on par with paradigmatic cases of killing an adult human. Now consider this story. You are a doctor on board a spaceship marooned on an alien planet. All your drugs have been destroyed but one of your patients is suffering severe pain. The aliens have a callous attitude to human life, but in exchange for a piece of fine art they offer you a drug. The aliens always tell the truth and they guarantee that the drug “terminates the pain.” But when you ask them about the mechanism by which it does so, they say: “Trade secret. It terminates the pain.” You try asking more general questions like: “Does it suppress pain signals in the brain?” They just say: “That would terminate the pain. It terminates the pain. Why ask more?” Then someone else in your crew asks: “Does it terminate the patient?” And the aliens say: “That would terminate the pain. It terminates the pain. Why ask more?”

The end result is that you have no idea whether the drug terminates the pain by suppressing the pain as such or by killing the patient. It is clear that in that case we should not use the drug, except as a last-ditch hope for a patient who is already dying. (I am not saying it is acceptable to kill someone who is already dying. But if someone is already dying, then one can tolerate a greater risk of unintended death.)

I am not saying, of course, that we need to find evidence against every crazy hypothesis. There is, after all, the hypothesis that ibuprofen works by annihilating the patient and calling in aliens that replace the patient with a pain-free simulacrum. The tiny but non-zero probability of that hypothesis should not keep us from using ibuprofen. But when we do not know how some drug or procedure works, and one of the serious hypotheses is that it works by killing someone, then that’s a problem.

Given the callousness of the aliens, the hypothesis that they are offering a euthanasia drug is a serious hypothesis. Likewise, the hypothesis that the IUD works primarily by preventing implantation is a serious hypothesis (see the suggestive evidence in the above-quoted paper). In both cases, then, unless we can find significant evidence against this serious hypothesis, the use of the drug or method is wrong (except perhaps in exceptional cases).

We rightly have a guilty-until-proved-innocent approach to medical interventions. Apart perhaps from exceptional cases (e.g., terminal ones), a medical intervention must be tested for its effects on the directly affected parties. The manufacturer's failure to gather data on the effects of the IUD on some of the directly affected parties, namely the embryos, means that the IUD has not been tested up to the morally required standards for medical interventions, and hence cannot be licitly used (apart perhaps from some exceptional cases), even apart from the data we already have that is suggestive of fatal effects on those parties.

Abortifacient effects of contraception and the Principle of Double Effect

Suppose that a contraceptive has the following properties:

  • Fewer than 1% of users have a pregnancy annually.

  • At least 5% of users annually experience a cycle where the contraceptive fails to prevent fertilization but does prevent implantation.

I think there is good empirical reason to think there are such contraceptives on the market. But that’s a matter for another post. Here I want to look at just the ethics question. So let’s suppose that the above stipulated properties obtain, and in fact that they are known to obtain.

The cases where the contraceptive prevents implantation are cases where the contraceptive kills an early embryo: in short, they are cases where the contraceptive is being abortifacient. The question I want to address in this post is this: Could someone who thinks early embryos have whatever property (personhood, membership in the human race, the imago dei, the possession of the soul, etc.) that makes it paradigmatically wrong to kill adult human beings nonetheless defend the contraceptive on the grounds that the deaths due to implantation-prevention are just an unintended and unfortunate side-effect?

Basically, the defense being envisioned would invoke some version of the Principle of Double Effect, which allows for some actions that have a bad side-effect that isn’t intended as a means or as an end. Of course, Double Effect requires that there not be other reasons why the action is wrong. But let’s bracket the question—which I address at length in my One Body book—whether there are other reasons the contraceptive could be wrong to use, and just focus on the abortifacient effect.

We can ask the question from two points of view:

  1. Can the manufacturer justify the production of the contraceptive on the grounds that failures of implantation are just an unfortunate side-effect?

  2. Can the user justify the use on those grounds?

Regarding 1, here’s a thought. For the contraceptive to be competitive, it has to be highly effective. If one does not count the 5% of annual cases where fertilization occurs but implantation is prevented as part of the contraceptive’s effectiveness, then one can at most claim 95% effectiveness for the contraceptive. And that effectiveness would put the contraceptive significantly behind the most effective formulations of the pill. In fact, it will put it somewhat behind the results that can be achieved by Natural Family Planning by a well-prepared and well-motivated couple. So for commercial purposes, the manufacturer will have to be advertising 99% effectiveness. But one cannot with moral consistency claim 99% effectiveness while holding that 5% of that is an unfortunate side-effect. By claiming 99% effectiveness, one is putting oneself behind the mechanisms that one knows are being used to achieve that effectiveness.
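Using the stipulated figures, the accounting the manufacturer faces is simply:

```latex
\underbrace{99\%}_{\text{claimed effectiveness}}
= \underbrace{94\%}_{\text{fertilization prevented}}
+ \underbrace{5\%}_{\text{implantation prevented}},
\qquad 1\%\ \text{pregnancies remain.}
```

Counting only the fertilization-preventing mechanism, the effectiveness claim can be at most 100% − 5% = 95%, and on the stipulated figures it is about 94%.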

Suppose that a manufacturer advertises an analgesic that is guaranteed to be 99% effective at pain relief. But suppose that 5% of the time the analgesic kills the patient, and 94% of the time it relieves pain non-fatally. Then indeed the analgesic relieves pain 99% of the time, since killing the patient stops the pain. But by holding out 99% effectiveness, the manufacturer is showing that it is really intending this to be a pain-relief-cum-euthanasia drug rather than a mere pain-relief drug.

What about 2? As we saw from the case of the manufacturer, the user cannot intend 99% effectiveness while saying that the deaths of early embryos are unfortunate side-effects. But the user, unlike the manufacturer, can say: “From my point of view, this is about 94% effective, with a 5% likelihood of a fatal side-effect, which side-effect I don’t intend.”

There are two points I want to make here. First, Double Effect requires there to be no reasonable alternatives to the course of action. But there are methods of fertility control that do not cause implantation-failure, for instance Natural Family Planning, and some of these methods are no less effective when compared against the 94% figure. And one cannot with moral consistency compare these methods against the 99% effectiveness figure while holding that 5% of that figure is an unfortunate side-effect one would like to avoid.

Finally, imagine a hypothetical male contraceptive pill that works by releasing genetically engineered sperm-eating viruses, and that has the following annual properties:

  • Fewer than 1% of female partners get pregnant.

  • But 5% of female partners get a fatal viral infection from it.

  • No men die.

Clearly, nobody would tolerate such a product. Both the manufacturer and the men using it would be accused of murder. Technically, it might not be murder if the deaths of the women were not intended, but the act would be closely akin to vehicular homicide through criminal negligence. Any Double Effect justification would have no hope of succeeding, because Double Effect requires that the unintended bads not be disproportionate to the intended goods. But a 5% annual chance of death is just not worth the contraceptive effect, especially when there are alternatives present. Indeed, even if the only alternative to using this nasty contraceptive were abstinence, which isn’t the case, surely total abstinence would typically be preferable to inducing a 5% annual chance of death (unless perhaps the woman were already suffering from a terminal disease).

Of course, my arguments are predicated on the assumption that killing an early embryo is morally on par with killing an adult. That's another argument.