Before delving into the following argument, it will be helpful to clarify the relevant notion of *probability* at play. First, probabilities are understood as reasonable degrees of expectation (i.e. epistemic probability). In a probability analysis which employs epistemic probabilities, we are concerned not so much with the way the world actually is apart from our justified acceptances, i.e. how likely things are by their nature; rather, we are concerned with the degree to which we *expect* an outcome *given our justified acceptances*.

To see this, imagine tossing a fair coin. Our rational degree of expectation ought to be 0.5 for the coin coming up heads. We will give heads a 0.5 chance *even if* the world is deterministic (i.e. even if the coin was determined or necessitated to come up heads by the nature of the toss and the laws of physics). The assignment of probability concerns our justified acceptances, and because we do not know the infinitude of causal factors at play in the coin toss, our justified acceptances on most occasions lead to a rational degree of expectation of 0.5.

Second, the argument concerns *conditional probabilities*. We are considering how likely something *would be* given a hypothetical situation. These are part of ordinary reasoning. Say the coin comes up heads 50 times in a row. We can consider how likely this outcome would be given the hypothesis that the coin is fair. On the fair coin hypothesis, P(heads) = 0.5 for each toss. So, P(50 heads|fair coin (i.e. chance) hypothesis) = (0.5)^50. On the other hand, on the hypothesis that the coin is heads on both sides, say, the coin would be expected to come up heads persistently (for it cannot fail to come up heads!). The point of Bayesian conditionalization is that, given our evidence of 50 heads in a row, *we do not actually have to observe* that the coin is heads on both sides in order to be highly confident in this hypothesis’ truth. The observation of persistence in coming up heads is *itself* sufficient to provide strong evidence against the brute, unexplained, chance persistence of heads.
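To make the conditionalization concrete, here is a minimal Python sketch of the coin case. The equal priors of ½ for the two hypotheses are an illustrative assumption, not part of the argument; exact fractions are used so no precision is lost:

```python
from fractions import Fraction

# Likelihood of observing 50 consecutive heads under each hypothesis.
likelihood_fair = Fraction(1, 2) ** 50   # P(50 heads | fair coin) = (0.5)^50
likelihood_double = Fraction(1)          # P(50 heads | two-headed coin) = 1

# Illustrative assumption: equal priors of 1/2 for the two hypotheses.
prior = Fraction(1, 2)

# Bayes' theorem: posterior is proportional to likelihood times prior.
posterior_fair = (likelihood_fair * prior) / (
    likelihood_fair * prior + likelihood_double * prior
)
print(float(posterior_fair))  # astronomically small
```

Even starting from indifference between the hypotheses, the posterior probability of the fair-coin hypothesis collapses to roughly one in a quadrillion, which is the point: the observed persistence alone suffices to all but rule out brute chance.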

With the aforementioned clarifications about probability and conditionalization out of the way, we can turn now to a fascinating argument (leveled by Pruss, 2019) for the conclusion that your mind is not the only mind in existence. In what follows, I will both explicate and expand upon Pruss’ argument.

In the following argument, “humans” will refer to physically (including neurally) healthy mature humans. Now, suppose there are *n* humans. Let Q1, …, Qn be their non-mental qualitative profiles: complete descriptions of their non-mental lives in qualitative terms. Let Hi be the hypothesis that everything with profile Qi is conscious. For instance, if we consider human #1, Q1 is a complete description of this human’s non-mental life (behavior, physiological state, third-person properties, etc.) in qualitative terms. H1 is, then, the hypothesis that anything with human #1’s non-mental profile has conscious, qualitative, subjective inner experience — a mental life.

Now consider the following three hypotheses:

M: All humans have a mental life.

N: Exactly one human has a mental life.

Z: No human has a mental life.

Assume further that our background information includes the fact that there are at least two humans.

Given this background information, the hypotheses are clearly mutually exclusive. It cannot be the case that, say, all of the ⩾2 humans have mental lives (hypothesis M), while at the same time one and only one human has a mental life (hypothesis N) — for that would entail that all humans and not all humans have mental lives, which is clearly absurd. Similar reasoning rules out the other pairings, so, given the background information, the three hypotheses are pairwise mutually exclusive.

Now, add that there are n humans on earth, where n is in the billions. Add further that they have profiles Q1, …, Qn, all of which are different. What’s a reasonable thing to think now?

Well, N is no more likely (in terms of prior probability) than M or Z. For one thing, both Z and M are categorically uniform in that they treat *similar non-mental profiles* alike with respect to *possessing mental life* (thereby maintaining a continuity or uniformity across a category of similar things). This categorical uniformity adds to their simplicity and therefore intrinsic probability, whereas N “builds in” a break in uniformity/continuity and increases the complexity of the hypothesis by means of an arbitrarily limited number of minded Qi profiles (viz. 1 as opposed to 2 or 3 or 365). For another thing, the principle of indifference (when applied to prior probabilities) dictates that we assign equal prior probabilities to hypotheses when (a) relevant evidence is absent that could serve to raise the probability of one as opposed to another, and (b) there are no internal, hypothesis-specific reasons (e.g. coherence, simplicity) to favor one hypothesis over another. Since we are considering prior probabilities, we are (per the nature of the situation) not taking into account certain evidential considerations. So, condition (a) is met for this case. Moreover, we have seen that internal, hypothesis-specific reasons actually count *against* N and *in favor of* M and Z, as N is less simple than M and Z. Even on the supposition that such internal, hypothesis-specific reasons do *not* count against N, we must nevertheless assign a prior probability of ⅓ to N per the principle of indifference. Hence, N has a maximum prior probability of ⅓ (although, again, we have very good reason to hold it is less intrinsically probable than this).

To re-cap: Although we have reason to hold that N is less intrinsically probable than M and Z, we can (for the sake of maximal conservatism) suppose that all three hypotheses are equally likely. Hence, all have a maximum prior probability of ⅓. Furthermore, if N is true, exactly one Hi is true. Moreover, all the Hi are just about equally on par given N (per the principle of indifference). Because of this, the probability that Hi is true given that N is true is 1/n. This is because there are n humans, and on the supposition that N is true, only one of them is minded — and since all such humans are equally on par in their prior probability of being that one lucky minded individual (per the principle of indifference), it follows that the probability of any given human’s having a mind (Hi) on the supposition of N is 1/n. To formalize this a bit:

P(Hi|N) = 1/n

From the above, we can deduce that P(Hi&N) is at most about 1/(3n). To see this deduction, it is helpful to see that the following equation can serve as a definition of conditional probability:

P(A|B) = P(A&B) / P(B)

To see why this equation successfully captures conditional probability, it is helpful to examine a concrete illustration. Suppose we have a class of 60 students. Suppose that 35 students had chicken for lunch, while 45 students had steak for lunch. Now, it clearly cannot be the case that every student ate only one type of meat for lunch, for that would entail there being 80 students (contrary to our supposition that there are 60 students). In fact, it must be the case that 20 students had both chicken and steak, 15 students only had chicken, and 25 students only had steak. The following Venn Diagram represents this situation visually:

With these statistics, we can ask: What is the probability that a student ate steak? To find this, simply divide the number of students who ate steak (45) by the total number of students (60) to obtain 0.75 or 75%.

We can also ask: What is the probability that a student ate chicken *given that the student ate steak*? Instead of dividing by the total number of students, our reference class becomes the total number of students who ate steak. Because of this, we take the number of students who *ate chicken in addition to eating steak* and divide it by our reference class (the total number of steak-eating students). For this, we obtain 20/45, or about 0.44 or 44%. More carefully, what we actually computed was the probability that a student ate both chicken and steak divided by the probability that the student was in our reference class of steak eaters. In other words, we performed the following calculation:

P(chicken|steak) = P(chicken & steak) / P(steak) = (20/60) / (45/60)

This becomes 20/45, as the 60s in the denominators cancel out, which in turn equals about 0.44 or 44% (as before). This simple illustration should make it reasonably clear why the conditional probability equation is as it is.
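The lunch illustration can be checked directly. The following Python sketch recomputes both probabilities from the counts given above, using exact fractions:

```python
from fractions import Fraction

total = 60
chicken = 35  # students who had chicken (some also had steak)
steak = 45    # students who had steak (some also had chicken)

# Inclusion-exclusion gives the overlap: 35 + 45 - 60 = 20 students had both.
both = chicken + steak - total

p_steak = Fraction(steak, total)              # P(steak) = 45/60 = 0.75
p_both = Fraction(both, total)                # P(chicken & steak) = 20/60
p_chicken_given_steak = p_both / p_steak      # (20/60)/(45/60) = 20/45

print(p_chicken_given_steak)  # 4/9, i.e. about 0.44
```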

From all of the above considerations, it can be straightforwardly deduced that P(Hi&N) is at most about 1/(3n). Because P(A&B)=P(A|B)P(B), it follows that P(Hi&N)=P(Hi|N)P(N). From earlier considerations, we deduced that P(Hi|N) is 1/n and that P(N) is at most ⅓. So, P(Hi&N) is at most (1/n)×(1/3)=1/(3n). Therefore, P(Hi&N) is at most 1/(3n).
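As a quick sanity check on this arithmetic, here is a short Python sketch; n = 10^9 is an illustrative stand-in for “billions,” not a figure from the argument itself:

```python
from fractions import Fraction

n = 10**9                      # illustrative number of humans (assumption)
p_N = Fraction(1, 3)           # maximum prior probability of hypothesis N
p_Hi_given_N = Fraction(1, n)  # each human equally likely to be the one mind

# Product rule: P(Hi&N) = P(Hi|N) * P(N) = 1/(3n)
p_Hi_and_N = p_Hi_given_N * p_N
print(p_Hi_and_N == Fraction(1, 3 * n))  # True
```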

On the other hand, however, P(Hi|Z)=0 and P(Hi|M)=1. This is because Z rules out any Qi being minded, whereas M entails that all Qi are minded.

But now suppose I learn that Qm is *my* profile. I then learn that Hm is true. Clearly, this rules out the all-zombie hypothesis Z, and it also rules out most of the Hi&N conjunctions (in fact, it rules out all of them except for one, namely Hm&N). From our three hypotheses on the table, then, the only two that are compatible with the new data (Qm) are the following two mutually exclusive hypotheses: (1) Hm&N and (2) M.

Crucially, though, my posterior probability (after learning Qm) for Hm&N is now approximately *at most* 1/(n + 1). Why is that so?

The trick is to employ the following fact: If A and B each entail E, then the ratio of P(A) to P(B) is the same as the ratio of P(A|E) to P(B|E). This is because A and A&E are logically equivalent when A entails E. But that means that P(A) must be the same as P(A&E). Now, recall from earlier that:

P(A|E) = P(A&E) / P(E) = P(A) / P(E)

By the same token, moreover, P(B|E) = P(B)/P(E), since we supposed that B likewise entails E. So, using these facts, we get:

P(A|E) / P(B|E) = [P(A)/P(E)] / [P(B)/P(E)] = P(A) / P(B)

Now let’s return our attention to the crucial claim, namely that my posterior probability for Hm&N is approximately *at most* 1/(n + 1). Given the previous assumed background information, Hm&N entails Qm and M entails Qm. And now we have a situation identical to the situation above concerning A and B each entailing E. So, the ratio of the posterior P(Hm&N|Qm) to the posterior P(M|Qm) is identical to the ratio of P(Hm&N) to P(M) (i.e. the ratio of their priors).
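The ratio-preservation fact can be verified on a toy probability space. In the following sketch (a fair die roll, with events chosen purely for illustration), A and B each entail E, and conditioning on E leaves their ratio unchanged:

```python
from fractions import Fraction

# Uniform probability space: one roll of a fair six-sided die.
OUTCOMES = frozenset(range(1, 7))

A = frozenset({2})        # "the roll is 2"      -- A entails E (A is a subset of E)
B = frozenset({4, 6})     # "the roll is 4 or 6" -- B entails E
E = frozenset({2, 4, 6})  # "the roll is even"

def p(event):
    return Fraction(len(event), len(OUTCOMES))

def p_given(event, cond):
    # P(event | cond) = P(event & cond) / P(cond)
    return p(event & cond) / p(cond)

# Since A and B each entail E, the prior ratio equals the posterior ratio.
assert p(A) / p(B) == p_given(A, E) / p_given(B, E) == Fraction(1, 2)
```

Here P(A):P(B) is 1:2 before conditioning (1/6 versus 2/6) and remains 1:2 after conditioning on E (1/3 versus 2/3), exactly as the general fact predicts.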

But, as we deduced earlier, P(Hm&N) is at most 1/(3n) whereas P(M) is at most ⅓, and the ratio can therefore be represented as follows:

P(Hm&N) : P(M) = 1/(3n) : 1/3 = 1 : n

Therefore, the ratio of the posterior of Hm&N to the posterior of M is 1:n.

But as we saw earlier, Hm&N and M are mutually exclusive. Their posterior probabilities, therefore, can sum to *at most* 1. But when we have a ratio between two mutually exclusive hypotheses of the form x:y, and their probabilities sum to 1 (i.e. they are also exhaustive), the following two equations must be true:

P(x) = x/(x+y)

P(y) = y/(x+y)

(When the probabilities sum to less than 1, these values become upper bounds.)

This may seem a bit abstract at first, so it will be helpful to concretize the situation. Suppose we have yet another class of students. Suppose further that the ratio of males to females in the class is 5:4. To simplify, suppose the class only contains 9 students in total. What is the probability that a student chosen at random is male? Clearly, it is 5/9. Similarly, the probability that a student chosen at random is female is 4/9. In these cases, we have a ratio of 5:4 (x:y), and the probability of a random member of the class being from one group (the males) is 5/(5+4) (in other words, x/(x+y)), whereas the probability of a random member of the class being from another group (the females) is 4/(5+4) (in other words, y/(x+y)). From this concrete example, we can see with relative intuitive ease that, when there is a ratio of x:y, and when x and y cannot both be true, *at most* P(x)=x/(x+y) and *at most* P(y)=y/(x+y).

Hence, because the ratio of the posterior probabilities of Hm&N to M is 1:n, and because the two hypotheses in question cannot both be true, it follows that the posterior probabilities are *at most* 1/(n+1) for Hm&N and n/(n+1) for M. Therefore, the posterior probability of Hm&N must be less than or equal to 1/(n+1). And since n is in the order of billions, n is at least 1 billion, in which case the posterior probability of Hm&N is less than or equal to 1/1,000,000,001. Thus, the hypothesis that my human mind is the only human mind has a probability of less than or equal to 0.000000001. Therefore, the probability that there is another human mind is greater than or equal to 0.999999999. The probability that there is at least one other mind is therefore *at least* 0.999999999.
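Putting the pieces together, the final bound can be computed directly. Again, n = 10^9 is an illustrative assumption standing in for “at least 1 billion humans”:

```python
from fractions import Fraction

n = 10**9  # illustrative lower bound on the number of humans (assumption)

# The posteriors stand in the ratio 1:n and sum to at most 1, so the
# solipsistic conjunction Hm&N receives at most 1/(n+1), leaving at
# least n/(n+1) for M (all humans are minded).
posterior_solipsism_max = Fraction(1, n + 1)
posterior_other_minds_min = 1 - posterior_solipsism_max

print(float(posterior_solipsism_max))    # about 1e-9
print(float(posterior_other_minds_min))  # about 0.999999999
```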

The result is monumentally significant: We are virtually certain that there exists at least one mind apart from our own mind. This, I aver, goes a substantial way towards resolving the problem of other minds.

How might we proceed from here? In particular, how might we reasonably infer that all humans have a mental life from the fact that we are virtually certain that at least one other such human has a mind (a mind apart from our own, that is)?

Here is one sketch. Note first that it would seem to be wholly arbitrary and inexplicable if there were just, say, two minds in existence associated with physiologically functioning human qualitative profiles, but billions of other non-minded humans with similar physiological profiles. We could thus run an explicability argument, perhaps. This arbitrariness/explicability worry, moreover, is magnified if we accept a *principle of relevant differences* according to which, roughly, if x and y only differ in respects that are irrelevant with regard to possessing some further property P (or having some further fact F true of them), then it is inexplicable (else: metaphysically impossible) for one of x or y to have P (or F) but the other to lack P (or F). From this, perhaps we could argue that the only differences that obtain between the Qi profiles seem quite clearly to be irrelevant with respect to having Hi be true of them. For instance, surely the mere *height* of something makes no difference with respect to that individual’s possessing a mind — similarly for age, skin color, hair color, eye color, slightly different behavioral tendencies, and so on. But such irrelevant differences seem to be the only differences one could point to in our scenario concerning minded versus purportedly non-minded humans.

Second, surely the feature “having a mind” is an essential feature of a thing. A mind (or at least a disposition towards having or developing a mental life, as in the case of human fetuses, people in a coma, people in deep sleep, and so on) seems to be the sort of thing that is built into the very nature of a thing which has it and is not just some contingent accident that something happens to possess. But if that is the case, then if at least two humans have minds, it follows that all humans (*qua human*) must have minds. Admittedly, this pushes the problem back a step insofar as we now face the problem of justifying why all the other seemingly human individuals with profiles Q1 through Qn *do*, in fact, share a common nature with the two humans we have established as minded (as opposed to, say, having the nature of human*, where the nature of human* consists in sharing nearly identical qualitative non-minded profiles with humans but essentially lacking minds). Nevertheless, absent any defeaters to the contrary, surely we are warranted in defeasibly inferring that two things share a nature in common when they are nearly identical in terms of all of their non-mental properties (like physical constitution, evolutionary history/origin, characteristic behavioral tendencies, nearly identical causal powers, and so on). From this, we could perhaps mount a defeasible argument for all humans’ having minds.

Perhaps we could also motivate the inference to all humans’ being minded by appeal to a principle of modal uniformity. In particular, we may hold that because (a) humans are categorically alike, and (b) the only differences that obtain between humans are degreed differences, we are justified in inferring that such degreed differences do not make a difference with respect to modal properties like the contingency of humans, their possessing minds (which is a modal property provided being minded is an essential feature of a thing), and so on.

Finally, perhaps we could give two epistemological solutions to build off of Pruss’ argument. The first makes use of a thesis of phenomenal conservatism while the second makes use of externalism. Roughly, the thesis of phenomenal conservatism I have in mind is as follows, where S is a subject and P is a proposition:

T: If it seems to S that P is true, then S has prima facie defeasible evidence for P’s being true.

So, perhaps we could argue that if it seems to one that other humans are minded, then absent any defeaters, one has evidence in favor of other humans’ being minded. And perhaps this is bolstered by Pruss’ argument, given that we know it is not only possibly true, but also actually true that there exists at least one human apart from ourselves that is minded.

Finally, one may adduce externalism as an extension of Pruss’ argument. Externalism is the thesis that, roughly, factors external to the knower confer justification on the knower’s beliefs. The nature of justification, in other words, is not completely determined by internal factors alone. One form of external justification is standing in an appropriate causal connection to the fact known. Crucially, though, suppose it is in fact the case that all humans are minded. Then, presumably, *their being minded* is what causes their seemingly intelligent and intentional behavior. And their seemingly intelligent and intentional behavior is, in turn, precisely what causes or induces us to believe that they are minded. So, there seems to be an appropriate causal connection between the *fact* that humans are minded and our *belief* that humans are minded (on the supposition that humans are in fact minded). It seems, then, that if humans are *in fact* minded, then our belief that they are minded counts as knowledge under externalist accounts of knowledge (provided that other conditions are met, of course, like reliably functioning cognitive faculties, etc). If this is true, then it seems that whether or not we know humans are minded depends primarily on the *actual fact of the matter*, in which case arguments attempting to demonstrate that we do not know other humans have minds must *establish that it is false that other humans have minds* — not merely that we have no specific reason to rule out certain hypothetical, remote epistemic possibilities of (say) philosophical zombies. Combining this with Pruss’ argument, we can see that no such skeptical argument could demonstrate the more general thesis that *no other human has a mind*, since (per Pruss’ argument) we are virtually certain that this is false.
The skeptic of other minds must therefore argue for a more restricted and limited thesis, while ensuring that his reasons for that restricted thesis do not equally apply to the known-to-be-false thesis that *no other human has a mind*. This certainly seems to place the skeptic on the back foot. Moreover, Pruss’ argument shows that it is certainly *possible* to stand in such an appropriate causal connection with another mind (given our virtual certainty that another such mind actually exists). Thus, there is no barrier *in principle* to attaining such knowledge, contrary to what many proponents of skeptical arguments claim.

Perhaps *you* have some ideas about where to proceed next. Wonderful! I would love for you to share them with me via the comment box below, my email, or my two Instagram accounts.

Author: Joe

Email: NaturalisticallyInclined@gmail.com

Instagram (@atheoslibertatem): https://www.instagram.com/atheoslibertatem/

Instagram (@professional_rationalism): https://www.instagram.com/professional_rationalism/

You write, “Given the previous assumed background information, Hm&N entails Qm and M entails Qm.” I wonder how this is so. Assuming what you mean is that the conjunction of the background information with Hm&N or with M entails Qm, and further assuming this background information includes Qm itself, of course that’s true. But from that I don’t think we can make the inferences about probabilities that you and Pruss make. After all, don’t those inferences rely on the claim that Hm&N and M both individually and all by themselves entail Qm, such that Hm&N is logically equivalent to Hm&N&Qm, and M logically equivalent to M&Qm? (I thought the idea was that A by itself entails E, so that A, not A + background knowledge, is logically equivalent to A&E.) On the other hand, if we toss one of my two assumptions about what you meant, then there’s no apparent entailment. That anything with profile Qm is minded and that exactly one human is minded doesn’t entail (datum Qm, which is) that you have profile Qm. Further, that all humans are minded doesn’t entail that you have profile Qm.

Thank you for the comment mate! I’m steeped in mid-terms as of right now. I hope to respond sometime this week. But we shall see! 😉