Faith, Knowledge, Belief, and Stochastic Theory Part 2: Inductive Reasoning

by jmb275 · Tags: apologetics, apostasy, faith, Logic, Mormon, Mormons, testimony

Deductive reasoning is a form of reasoning in which the conclusion is drawn directly from the premises; the aim is to show that the conclusion necessarily follows from them. For example:

Bridges built using sound engineering principles are safe. The Bay Bridge was built using sound engineering principles. Therefore, the Bay Bridge was safe (at least when it was built).

This form of reasoning is reliable, at least as far as logic goes: if the premises are true, the conclusion must be true as well.

In contrast, inductive reasoning is a form of reasoning in which the premises lend some degree of support to the conclusion but do not fully establish it. For example:

None of the bridges I have walked across has fallen. Therefore, all bridges are safe to walk across.

This form of reasoning is unreliable in producing valid results; strictly speaking, it is a logical fallacy. Inductive reasoning can take many forms: generalization, argument by analogy, causal inference, prediction, etc. These forms have different strengths and can provide a kind of “information” that is most useful, even though by themselves they are unreliable.

There are people at both ends of the spectrum regarding inductive reasoning. Some people are all too willing to throw out the premises entirely, since no conclusion can be reliably drawn from them; these people ignore the “information” contained in the premises. In “information theory” (a branch of stochastic theory), “entropy” (a measure of uncertainty) is used to quantify how much “information” a premise carries. EVERYTHING has some amount of “information,” even if it is very little (the short code sketch below illustrates the idea). On the other end of the spectrum are people all too eager to rely on inductive reasoning, supposing they have made a fantastic argument while ignoring the holes in their logic. These people assign WAY too much “information” to the premises.
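
To make “entropy” a bit more concrete, here is a minimal sketch (my own illustration using the standard Shannon entropy formula; the numbers are hypothetical, not from this post):

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A maximally uncertain observation (a fair coin) carries a full bit of information.
print(entropy([0.5, 0.5]))    # 1.0

# Even a nearly certain observation still carries *some* information.
print(entropy([0.99, 0.01]))  # ~0.081
```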

Of the possible inductive reasoning techniques, Bayesian inference has become the most influential and reliable. It is particularly important in fields of science and engineering where conclusions must be drawn in the presence of noise and modeling error.

In my previous post I discussed why I feel faith is not well understood in Mormonism, and why I think some knowledge of Bayesian probability, coupled with Bayesian inference, does a good job of explaining what faith, knowledge, and belief are and how we can apply them in our lives. In this post I will elaborate on Bayesian inference as a form of inductive reasoning and try to show how I believe it influences our faith, beliefs, and knowledge.

Bayesian Inference

Bayes’ rule relates one conditional probability to its inverse through a prior and a marginal probability (don’t worry, this will become clear in a sec). The formula is:

[math]P(A|B)=\frac{P(B|A)P(A)}{P(B)}[/math]

A conditional probability (i.e. [math]P(A|B)[/math]) is the probability of some event A given that B occurs; it reads “the probability of A given B.” Explaining the terms (a short code sketch follows the list):

  • Let [math]A[/math] represent a hypothesis
  • Let [math]B[/math] represent a new piece of evidence
  • [math]P(A|B)[/math] is the posterior probability (i.e. the probability we are interested in) and is the probability of our hypothesis given our new evidence
  • [math]P(B|A)[/math] is called the likelihood and is the inverse of what we actually want. This is the probability of our evidence given our hypothesis
  • [math]P(A)[/math] is the prior (i.e. what we believe before we start)
  • [math]P(B)[/math] is the marginal probability and represents the probability of witnessing the evidence under all possible hypotheses
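
Putting the terms together, here is a minimal sketch in Python (my own construction, assuming a finite set of competing hypotheses; nothing here comes from the original post):

```python
def bayes_update(priors, likelihoods):
    """Return posterior probabilities P(H_i | E) for each hypothesis H_i.

    priors[i]      -- P(H_i), our belief before seeing the evidence
    likelihoods[i] -- P(E | H_i), how likely the evidence is under H_i
    """
    # The marginal P(E): probability of the evidence under ALL hypotheses.
    marginal = sum(p * l for p, l in zip(priors, likelihoods))
    return [p * l / marginal for p, l in zip(priors, likelihoods)]
```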

There is also a form of Bayes’ rule that works for probability density functions (PDFs). It is a bit more difficult to follow, but the idea is the same.
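
For reference, the density form can be written as follows (this equation is my addition, using [math]\theta[/math] for the hypothesis/parameter and [math]x[/math] for the evidence):

[math]p(\theta|x)=\frac{p(x|\theta)\,p(\theta)}{\int p(x|\theta')\,p(\theta')\,d\theta'}[/math]

The integral in the denominator plays the same role as the marginal [math]P(B)[/math] above: it accounts for the evidence under all possible hypotheses.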

A Simple Example

A simple example from Wikipedia will help.

To illustrate, suppose there are two full bowls of cookies. Bowl #1 has 10 chocolate chip and 30 plain cookies, while bowl #2 has 20 of each. Our friend Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of bowl #1?

Intuitively, it seems clear that the answer should be more than a half, since there are more plain cookies in bowl #1. The precise answer is given by Bayes’ theorem. Let [math]H_1[/math] correspond to bowl #1, and [math]H_2[/math] to bowl #2. It is given that the bowls are identical from Fred’s point of view, thus [math]P(H_1)=P(H_2)[/math], and the two must add up to 1, so both are equal to 0.5. The event [math]E[/math] is the observation of a plain cookie. From the contents of the bowls, we know that [math]P(E|H_1)=30/40=0.75[/math] and [math]P(E|H_2)=20/40=0.5[/math]. Bayes’ formula then yields
[math]P(H_1|E)=\frac{P(E|H_1)P(H_1)}{P(E|H_1)P(H_1)+P(E|H_2)P(H_2)}[/math]

[math]P(H_1|E)=\frac{0.75\times0.5}{0.75\times0.5+0.5\times0.5}[/math]

[math]P(H_1|E)=0.6[/math]

Before we observed the cookie, the probability we assigned for Fred having chosen bowl #1 was the prior probability, [math]P(H_1)[/math], which was 0.5. After observing the cookie, we must revise the probability to [math]P(H_1|E)[/math], which is 0.6.
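
As a quick sanity check (my addition, using plain arithmetic rather than anything from the original example):

```python
p_h1, p_h2 = 0.5, 0.5            # priors: the bowls are equally likely
p_e_h1, p_e_h2 = 30/40, 20/40    # P(plain | bowl 1), P(plain | bowl 2)

# Bayes' rule: posterior = likelihood * prior / marginal
posterior_h1 = (p_e_h1 * p_h1) / (p_e_h1 * p_h1 + p_e_h2 * p_h2)
print(posterior_h1)  # 0.6
```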

The most important part of this example is to note that there is information (in a stochastic sense) in the evidence we observed, namely that a plain cookie was drawn. Bayesian inference gives us the tools necessary to characterize our belief about the bowl from which Fred picked the cookie.

A Practical Example

Let’s walk through a more practical and intuitive example to illustrate how this might relate to faith, knowledge, and belief.

Suppose Mary grows up in San Francisco, regularly traveling over the numerous bridges connecting the peninsula to the mainland. She has crossed these bridges countless times. She believes that they were constructed using sound engineering principles, and that standards were in place to help guide the engineers in making good decisions. She believes that steel is very strong and that the materials used met specifications for stress and strength. She also believes there are engineers who regularly inspect the bridges for weaknesses and problems and would alert her if necessary.

These are all fairly reasonable assumptions in our modern society, and we might easily say that Mary “knows” that if she goes across the Golden Gate today, the bridge will not collapse. In this regard it likely takes little “faith” for her to cross the bridge. She doesn’t have to take a “mighty leap,” as it were. She doesn’t grow spiritually by exercising this faith/knowledge to cross the bridge. Nevertheless, it is clear to everyone (I hope) that Mary, in fact, does not “know” that the bridge is safe. There is no way she can know. All she can say is that the last time she crossed the bridge, it did not collapse.

In terms of my last post, we might say that Mary’s confidence distribution has a mean of “the bridge is safe” with a very very small standard deviation.

Now, let us suppose that one day Mary goes across the Bay Bridge (which is and probably will be forever under construction) and part of the bridge collapses. Fortunately, Mary is on the part of the bridge that remains safe. But she witnesses the tragedy, including the loss of many lives.

The question is, what information is contained in this observation, and how should it affect Mary’s confidence/knowledge/belief/faith in her frequent bridge crossings? Drawing the right conclusion is very difficult. If we use inductive reasoning we might say:

All bridges built using sound engineering principles will not collapse. The Bay Bridge collapsed. Therefore, all bridges are not safe.

Although this conclusion feels like a real stretch, if we place ourselves in Mary’s shoes it might seem reasonable given the fear associated with witnessing the collapse of a bridge. From a Bayesian point of view, the evidence Mary saw was so overwhelming, and she placed such inappropriately great weight on it, that the mean and standard deviation of her confidence distribution shifted wildly. At this point, her mean has likely shifted to “all bridges are not safe” with a very small standard deviation. Now it does take a “mighty leap” of faith for Mary to cross a bridge, and she may grow spiritually/emotionally by taking that leap.

Of course, to a third party such as a concerned relative, this conclusion is completely unreasonable. We can poke holes in her reasoning all day long. Just because the Bay Bridge collapsed doesn’t mean another will. Just because the Bay Bridge collapsed doesn’t mean engineering principles are invalid; we don’t even know the cause of the collapse. Just because the Bay Bridge collapsed doesn’t mean the system of inspecting bridges is broken. The list could go on and on. From our perspective, we might say that Mary’s confidence distribution shouldn’t change at all! But that would ignore the information contained in the observation and/or assign far too little weight to it.

The right answer is to acknowledge the information contained in the observation and assign it the appropriate weight. Obviously this is a completely arbitrary and subjective exercise. Who is to say what the right weight is? Who is to say what the appropriate measure of information is? Bayesian inference gives us the tools to analyze the problem, but it does nothing to help us characterize the evidence or choose the weight it deserves.
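
To see how much the assigned weight matters, here is a toy sketch (entirely my own construction, not part of the post’s argument). It models confidence that “a crossing is safe” as a Beta distribution, builds the prior from many hypothetical safe crossings, and treats the “weight” Mary gives the collapse as the number of pseudo-observations it counts for:

```python
def beta_mean_std(alpha, beta):
    """Mean and standard deviation of a Beta(alpha, beta) distribution."""
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, var ** 0.5

# Hypothetical prior: ~1000 safe crossings and (almost) no observed failures.
alpha, beta = 1000.0, 1.0

# The same single collapse, counted with three different weights.
for weight in (1.0, 50.0, 500.0):
    mean, std = beta_mean_std(alpha, beta + weight)
    print(f"weight={weight:5.0f}  P(safe) ~ {mean:.3f} +/- {std:.3f}")
```

With a weight of 1 the posterior barely moves; with a weight of 500 (Mary’s overreaction) confidence collapses toward “bridges are not safe.” Bayes’ rule dictates the update, but the weight is ours to choose.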

The Application to Faith in Mormonism

Since this post is already too long, I’ll only weakly apply this to faith and save a more in depth analysis for my next post (though I think if you give it some thought the connections are readily apparent).

Bayesian inference can be a valuable tool for helping us understand how to incorporate evidence into our confidence distribution for a specific hypothesis. “Faith,” “knowledge,” “belief,” etc. are measures of confidence on which we base our actions. The real question regarding our “faith” is what weight we apply to various kinds of evidence. How that evidence affects our confidence distribution is very simply described using Bayes’ rule.

For most members of the church, spiritual manifestations are a critical piece of evidence that validate their beliefs. They therefore place high weights on those pieces of evidence, giving them a mean of “the LDS church is true” with a very low standard deviation. For others, spiritual manifestations may be too wrapped up in psychology, emotions, etc. to be reliable. Hence they place low weight on such evidence and although they may have the same mean, they may have a larger standard deviation on their confidence. Those who experience disaffection may throw the “baby out with the bathwater” and dismiss the experience altogether, eventually allowing their mean to shift to “the LDS church is NOT true” with a low standard deviation.

Humans have a very good Bayesian inference mechanism built right into their intelligence. We can perform Bayes’-rule calculations internally with very little effort and often draw good conclusions amidst a plethora of evidence, particularly when we have no psychological attachment to the outcome. But when we do have a psychological attachment, it becomes VERY VERY difficult to keep the internal Bayesian inference mechanism from becoming biased.

Comments 12

  2. I love Bayesian statistics, and it is used by cosmologists daily. All of the constraints on the age of the universe, the amounts of matter/dark matter/dark energy, the expansion rate, etc. come from Bayesian analysis.

    You’re right, the best part about Bayesian statistics is that gathering new information correctly reweights the likelihood that something is true. Thus you can say, as we cosmologists say, after analyzing this data we are 95% confident the age of the universe is 13.7 +/- 0.4 billion years. Thus we know what the most likely age is *and* can say exactly how much confidence we have in the result.

    If you are looking to quantify how much confidence you can have in something, and would like that confidence to be weighted appropriately by all the available data, Bayesian statistics is definitely a great tool to be using.

    Great post.

  3. My only critique of using Bayesian statistics to judge how much confidence you have in something is: it may not be practical for day-to-day things.

    For example, yesterday people kept asking me how confident I was that the Lakers would win. I said extremely confident, although for me to sit down, calculate mathematical likelihoods, and do some Bayesian inference just to give the questioners a “proper” answer would be a little impractical.

    But for important things, the things that need to be correct, Bayesian statistics will be great.

  4. jmb275:

    Great post, again. I think it would be more correct (particularly in regard to Joseph Smidt’s point about everyday use) to say that we have something built into our brains that mimics a Bayesian calculation. But these “neural network” processes depend on positive chemical (i.e., emotional) feedbacks, so I’m not sure how we distinguish situations of religious belief/faith/knowledge. It seems the latter ARE the kind of situations where our subconscious bias IS the Bayesian calculation.

    I also think that it’s not a simple linear scale, because the hypothesis set in the denominator can be far more complex.

    For example, I have a high mean, modest standard deviation for the hypothesis that the Restoration is “true”, but I have a very low mean, modest standard deviation for the hypothesis that the LDS church is the “true” manifestation of that “true” Restoration, and so on.

    I can reverse engineer the particular weightings of evidence that give rise to those different conclusions in my case, and that’s a strength of your model, but it’s hard to know how many hypotheses significantly contribute to the denominator.

  5. I agree with FireTag and Joseph Smidt. I like your analogy of assigning confidence to faith processes, but I think it is still hard to apply in practice. (As a frequentist, I feel I really need to understand Bayesian inference better.)

  6. Pingback: Reductivist Accounts of Faith | Al Wisdom

  7. Thanks for the comments. Let me try to address some of them:
    Re #3 Joseph:
    Yes, in my previous post someone commented on the nature of science as being only about falsification. Although that is certainly a large part of it, abductive reasoning is also a large part. Rarely (never?) does science say “x is true”; rather we say “we believe x to be highly probable.”

    Re #4 Joseph:
    I agree. I think we have a Bayesian inference calculator built in. That is to say, in an inverse-problem sense, humans are able to quickly assimilate new information and apply it to the proper confidence distribution to determine the likelihood of something. For example: suppose I show you a picture of an apple. You can readily identify it as an apple. What if I change the color scheme to B&W? Can you still determine it to be an apple? Probably. How about sepia? Yep. The point is, humans are VERY good at making decisions based on probabilities and what we observe in the world around us. Rarely do we KNOW something to be true; rather we consider it highly probable based on a prior, a marginal, and a likelihood.

    Re #5 FireTag
    Ah, you’ve hit the hard part. Yes, for me this is where I have a hard time. As I mentioned above, humans are excellent at assimilating new information and applying it to our prior distribution. What we are REALLY bad at is doing this amidst emotional experience. That is to say, humans are VERY BAD at objectively analyzing anything when they have some emotional attachment. For me, this is why I tend to doubt my spiritual experiences as an indicator of objective truth. Rather, I accept my spiritual experiences as a subjective reality, as part of my perception. It is indeed very difficult to know how many hypotheses contribute to the marginal distribution. I recently read a debate between a prominent Protestant and Bart Ehrman. The former attempted to actually use Bayesian inference to demonstrate why Jesus’ resurrection was probable. The key was in the marginal. He tried to argue that the marginal was very small (hence making the overall probability high). He completely missed the point that the marginal must take into account the probability of the evidence under ALL HYPOTHESES.

    Re #6 MH
    I think I may not be explaining it quite right. I’m not saying this is something we should try to apply in practice. I’m claiming that we do this already, subconsciously. In reality, what I’m saying is that we all act in accordance with what we deem most probable. For Mormons this means we go to church and are obedient because we have high confidence that this is the “right” thing for us to do. To do otherwise would be the definition of irrational. Is this faith, or knowledge, or what? I don’t really know, and I don’t think it matters. The point is, to an orthodox Mormon paying tithing is a manifestation of their faith. Someone in disaffection may not pay tithing because they see no “blessings” from it. Are they faithless? My argument is no, they are not. Paying tithing is an arbitrary manifestation of faith. But acting according to one’s confidence is a manifestation of one’s faith, or knowledge, or whatever.

    Hopefully, next post when I put all this together it will make more sense.

  8. Re #4 Joseph
    After thinking about this, I see what you’re getting at. Let me try to respond. You mentioned the Lakers example. I suspect you might do a lot more thinking about it, and possibly even do some calculations, if you had $1 million riding on the outcome. That is to say, when you have a lot riding on the wager you will make a bigger deal of it, and you will act according to your confidence model. If you now apply this to religion, what do you get? You don’t get out your calculator and assign some numbers trying to figure out which church is true. But you do have a running prior distribution in your head that has been shaped over the years, which tells you that the LDS church is your “mean.” What happens when you encounter new information that either supports or weakens that distribution? How much “evidence” would it require before you conceded that the church isn’t true? Is there any amount? What do you do with new information about religion? Do you dismiss it (that is, simply not apply it to your prior at all)? Or do you find ways to interpret it that confirm what you already believe (this is the normal human tendency)?

    My point is, I’m not saying we should be using this in our lives with real probabilities. I’m suggesting that we all have a running prior in our head (let’s just say with regard to religion for now) that determines what we believe. When we encounter new information, if we are not emotionally attached to it, our internal Bayesian inference calculator applies the new information to the prior very well. But when we have emotional attachment, we cannot objectively apply the new information and the result will always be biased toward our prior (confirmation bias).

  9. Re Firetag
    (Sorry, the more I think about all the comments the more I have to say)

    I also think that it’s not a simple linear scale, because the hypothesis set in the denominator can be far more complex.

    Indeed. Additionally, note that although I am using Gaussian terminology, in reality the distribution is likely not Gaussian, and the underlying process is likely nonlinear. This does NOT negate the correctness of Bayesian inference, though if we were to try to solve it with real calculations it would be nigh impossible. Nevertheless, humans (IMHO) are more than capable of solving nonlinear problems internally with little to no effort.

  10. OK. This may make everyone’s eyes glaze over, but the linearity I was referencing was faith < belief < knowledge. Clearly religious concepts can branch so that we have faith/belief/knowledge about different things. The inference mechanisms lead to mental and spiritual evolution, not simply toward greater or lesser belief.

  11. The probability of H conditional on E is defined as P(H|E) = P(H & E)/P(E), provided that both terms of this ratio exist and P(E) > 0.

    H = number of bridge crossings, E = bridge collapses.
    Bayes-wise, Mary’s estimate of the chance of a bridge collapse after she sees one is roughly (total number of collapses she has witnessed) / ((total number of times she has driven across bridges that haven’t collapsed) + (total number of collapses she has witnessed)).

    With H = many and E = 1, we get 1/(many + 1).

    That is, the single bridge collapse doesn’t change the Bayesian probability by any significant amount. Mary may feel less safe, but she is as safe as she ever was.
