One of the most fundamental premises of Mormonism is the idea of free will. While we take this for granted in Mormonism, in the secular world the debate is far from settled. In fact, the debate over determinism vs. libertarianism (not the political philosophy but the metaphysical one) has raged for centuries.
Determinism states that every event is causally determined by previous events. This further implies that if we knew all the events (causes) we could actually predict exactly what a particular agent would do. I think this is where most people balk at determinism (i.e. they don’t like to feel controlled, even if only by nature itself). However, I don’t think most people would deny that there is indeed an element of determinism in our behavior. There are enough commonalities among most people that we can quite accurately predict how a person will act in a particular situation (within limitations, of course).
On the other hand, Libertarianism states that agents have free will. Libertarians (again not the political philosophy) assert that free will is logically incompatible with determinism, making the two mutually exclusive. The defining factor for libertarians is that an individual is able to take more than one possible course of action in any given scenario.
Artificial Intelligence and Humans
More recently, as our technology advances, debates over the capability and morality of artificial intelligence (AI) have become more frequent. We see them in movies, books, and even scientific journals. The issue explored in movies like I, Robot is that AI eventually becomes so advanced it develops free will on its own. This seems to imply the superiority of free will over determinism (i.e. the “robot” evolved to a higher state of intelligence, making it on par with humans). And indeed, while I have met some who were convinced determinists, most people I know are at least compatibilists (those who assert free will and determinism are not logically incompatible and hence accept both positions), if not libertarians.
Rather than spur a fruitless debate over whether humans are deterministic or have free will, I want to ask why one would be, or is, better than the other. Why, in Mormonism (and in humanity generally, it seems to me) do we assume that free will is better than being deterministic? If we are deterministic beings, are we less interesting, or somehow not as good as we would be if we had free will? Is there some benefit that a free-will agent has over a deterministic one?
An Aerospace Example
Consider the following example. In my research lab we are interested in autonomous uninhabited aircraft. Though we (as a society) have had autonomous airplanes for a while now, they’re not truly autonomous. There is always a human somewhere in the loop, whether at a computer screen, watching the aircraft, etc. The question is, could we design a fully autonomous aircraft (i.e. from launch to landing, including the proper handling of all unanticipated problems) that would perform as well as (or better than) an expert pilot?
Before answering too quickly, consider:
- The laws of physics are well known and generally not subject to negotiation.
- Assume I can build a computer as intelligent as a human.
- Assume the computer can sense, or has access to ALL the information that a human pilot would.
If you believe the answer is no, why? Is there something intrinsically better about free will that makes such an agent better at flying an airplane than a deterministic one? Is having free will a more “enlightened” state than being deterministic? Why?
If you believe the answer is yes, do you also believe that humans are deterministic? Is it only a matter of time before we discover ALL the causal influences that impact human behavior and will thus have the ability to perfectly predict it?
Now, suppose I could actually demonstrate such an autonomous aircraft to you in an arbitrarily large number of flights. Would you concede there is nothing inherently better about free will, or would you hold out that there is always one untested case in which the expert pilot would do better?
How do your beliefs influence your answer?
I’m destined to believe in free will. I have no choice.
For a less smart-ass answer, I think that both apply non-absolutely. Each of us acts with finite abilities to make free will choices, but, when aggregated, those choices can often fit under bell-like curves, whose shapes may well be predictable much of the time. Even at an individual level, our choices tend to fall into patterns which can become predictable. Exceptions will happen too, and old patterns and habits can be replaced by new ones, and the reasons for those changes can be more and less predictable.
As an avid sci-fi fan, I have to side with free will over determinism. No matter how good the technology is, good old intuition always wins out in sci-fi! Although that may be a Hollywood conceit. However, I do believe that a mix of both is best. Pure determinism isn’t that appealing for the reasons given above, but having a pull toward certain behaviors yet the ability, through strong effort, to swim against the tide of our tendencies is something that would lead to personal growth.
Geddy Lee of Rush put it best…”You can choose a ready guide in some celestial voice…if you choose not to decide, you still have made a choice! …I will choose a proper sphere, I will choose free will!”
My experience is that the Church talks a good game about “Free Agency,” but as a practical matter (supervision of missionaries comes to mind) it tends to be controlling for fear that its members will choose…poorly. Maybe we should be allowed to drink from the wrong cup and crumble to dust once in a while as an example! Else why the so-called “Strengthening the Members” committee? If the nutty thoughts of various members are that much of a danger, then the Church is in a poor position indeed!
Free will, or freedom, is paradoxically connected with knowledge and discipline. Think of it as like the fictional “Force” of Star Wars. By its nature it can only be used by those willing to discipline themselves. Once having mastered oneself, true power comes forth…unless, of course, you choose the quick and easy path, as Vader did.
From a scientific viewpoint, it’s my observation that AI is a farce. A computer (robot) may have the ability to process information much faster than a human, but it still performs within the limits of its programming. Else why does C-3PO pontificate about it being against his programming to impersonate a deity? Indeed, how would one program in the ability for a computer to decide for itself what it should do? And let’s be glad that perhaps we can’t…ever heard of SkyNet? Hence Asimov’s three laws of robotics.
To try and answer a question from the middle of the article…
I think the answer is because Mormons (and humanity in general) want to preserve a sense of moral culpability or responsibility. If things are totally deterministic, then how can we say that we are responsible for our actions? “We” are at the mercy of various laws, etc.
(However, I think that a pure view of free will strips out moral responsibility too. The reason we can think about these things in the first place is because we know actions have consequences…and these consequences are deterministic.)
I think free will is particularly privileged in Mormonism because of its elevating status for us humans. It allows us to be closer to the divine, whereas in other theologies (*cough* Calvinism) humans seem to be at the mercy of God, who will determine who will be saved and who will not.
…nevertheless, there is one aspect of Calvinist thought relating to determinism that I find somewhat sensical. We don’t act in a completely will-neutral manner. Rather, our actions are influenced, determined, and decided by our nature, our personality, our motivations, etc. (If not, then it would be very difficult to speak about the character of people, but we clearly can talk about people’s habits, emotional reactions, and so on.) The Calvinist simply emphasizes the fallen nature of humanity, and argues that we will never *from our nature* choose God. (E.g., the Calvinist god looks like a monster to me. That’s entirely the point, from my supposed “sin nature.”) It makes sense to me that I need a drastic change of nature (that *I* can’t just flick on or off *myself*) to change my beliefs, opinions, and gut emotional reactions to a great many things.
Ah yes, this LFW vs. determinism subject is an old NCT standby. See dozens of posts and thousands of comments in the debate over this subject here and here.
I hold that Mormonism fails if some variation of libertarian free will is not true. (The rub for most Mormons is that libertarian free will is not compatible with foreknowledge, and they want both.)
The biggest problem for determinists is that there are no moral agents in a fully deterministic universe. In such a reality no one is really morally accountable for anything; rather, all are cogs in a great causal chain. So with regard to your question about the plane — it depends on whether you want a moral agent somewhere in the loop or not. If not, then a fully autonomous plane could be hunky-dory. If so, you would need a human in the loop. (Of course, I am an LFW guy, so I assume humans are moral agents.)
FYI — My last comment appears to be stuck in moderation.
I see it too, Geoff! Unfortunately, I don’t have admin privileges, hehe
There is also the problem of evil. If we are not free, then God is responsible for moral evil.
Also, would your program land a plane in the Hudson river?
Free will is a complicated matter when we consider that some with cognitive and mental health challenges lack the ability to choose freely. Surely the Savior’s admonition not to judge unrighteous judgment applies to all who condemn those who lack the capacity to choose well.
Going back to the plane: given the power of checklists, checklist-driven decisions are often statistically better than “free will” ones.
#12 — “Aircraft one performs a split S? That’s the last thing you should do.”
>>”I hold that Mormonism fails if some variation libertarian free will is not true.”
I disagree. See here: http://chriscarrollsmith.blogspot.com/2009/05/can-one-be-mormon-and-compatibilist.html.
>>”The biggest problem for determinists is there are no moral agents in a fully deterministic universe. In such a reality no one is really morally accountable for anything. Rather all are cogs in a great causal chain.”
I disagree again. In fact, I would argue that it is only to the extent that my present actions and state of mind proceed causally from my past state of mind that they can meaningfully be said to have been “willed” at all.
(Some libertarians have attempted to construct logical proofs of your argument, but they always end up sneaking the conclusion into the premises.)
>>”There is also the problem of evil. If we are not free, then God is responsible for moral evil.”
Any omnipotent creator-God is responsible for diseases and natural disasters regardless of human freedom, so I don’t see why it should seem out of character for such a God to be implicated in the creation of moral evil. Note, however, that determinism does not entail the omnipotence or creative act of God. If God is less than omnipotent or less than the creator of the universe, he may simply be unable to prevent deterministic natural and moral evils from occurring.
>>”From a scientific viewpoint, it’s my observation that AI is a farce. A computer (robot) may have the ability to process information much faster than a human, but it still performs within the limit of its programming.”
The problem is in the hardware, not the software. Computers are made of fundamentally non-adaptive materials, such as metals and plastic. Organic materials, by contrast, are highly adaptive, permeable, and chemically reactive. As such, the human brain is more capable of learning and adaptation than computer hardware. Attempts have been made to create software that simulates the functioning of neurons, but the processing power required is too enormous to be feasible. I do believe that AI will achieve consciousness someday, but it may have to wait until we learn to engineer organic computer hardware.
My observation on the free will vs. determinism debate is that artsy, right-brained people tend to opt for free will, whereas scientifically minded, left-brained people tend to opt for determinism. Perhaps our views are “determined” by the way we are wired. 🙂 Left-brainers are used to thoughts following one another very logically and causally, one after the other, like dominoes. Right-brainers are used to having flashes of insight and inspiration that seem to come out of nowhere and to have no logical cause.
Quite honestly, I think this is, in fact, Hollywood feeding our ego. One problem with the way I framed the post is that I made it abstract enough to allow the readers to determine the metric. As you’ve so eloquently pointed out, for a human life, I tend to agree that a mix of both is best. But I’m not certain that there is something intrinsically better about free will, yet I think the vast majority of us think that there is.
Well, you’ve made the assumption that humans do NOT perform within the limit of their programming. Perhaps we do. This is the very heart of the question.
Re Andrew S
Good points. As I stated for Hawkgrrrl, I failed to state the metric by which one would qualify the “goodness” of either free will or determinism. I agree with what you’ve written as it relates to the life of a human. But what about more focused tasks like flying an airplane, driving a car, etc.? Are these tasks radically different from living a life, or just a subset of it? Could I build a robot that lived a “life,” and would we then feel compelled to grant it “free will” so that it might live a good life?
Re Geoff J
Yes, indeed, a worn out topic, but I had hoped for the twist of pitting the two against each other rather than trying to determine which characterizes humans. I agree with this statement you’ve made.
I agree. My question is why would I want or not want a moral agent in the loop? My AI airplane can still make decisions, and 99% of the time they will be the optimal one (a much better percentage than any moral agent). Does the moral agent give me some added bonus, perhaps redundancy? If so, would a ground pilot that is also a robot be a good redundancy measure? I think this really drives to the heart of the matter – what does having a moral agent in the loop buy me?
I’m not quite sure we would want to do that (e.g., give things like cars and airplanes “lives” and then grant them free will so they might have a good life.) Because they don’t have things like free will or consciousness or whatever, we can do a lot more with them. (E.g., if you are “abusive” to your car, then that might cost you money to fix it, but you’re not going to go to jail. Your car, if it doesn’t work, will not work because something is physically wrong with it, not simply because it has chosen not to cooperate.)
Think about animals as a kind of intermediate step. It might be nicer if our pets were “freer,” but we’d rather have our work animals be less free.
Re Eric Nielson
No, actually, an autonomous airplane would have flown back to the airport and landed there safely, saving millions of dollars. My lab has studied this incident closely, and we can show that the aircraft could have made it back had the pilot known it was possible and taken any number of appropriate maneuvers to that end. And even if it weren’t possible, an AI airplane would potentially have the capability of knowing that landing in the Hudson was a possibility and of choosing it appropriately. As a sidenote, I’m not faulting that pilot at all; he is absolutely a hero. He was not aware that he could have made it back to the airport. But an AI system would have known that.
Not only that, but the airplane would use Bayesian inference of some nature, so it would probabilistically account for all the information and make an optimal decision. That decision would almost always be as good or better than any human pilot.
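To make that concrete, here is a minimal sketch of what Bayesian decision-making for an emergency landing might look like. Everything below (the scenario framing, the function names, and all the probabilities and utilities) is invented for illustration; it is not any real avionics system, and certainly not the one my lab works on.

```python
# Hypothetical sketch of Bayesian decision-making for an emergency landing.
# The scenario, names, probabilities, and utilities are all made up for
# illustration; this is not any real avionics system.

def posterior_reach_runway(prior, likelihood_given_reach, likelihood_given_fail):
    """Bayes' rule: P(can reach runway | sensor evidence)."""
    joint_reach = prior * likelihood_given_reach
    joint_fail = (1 - prior) * likelihood_given_fail
    return joint_reach / (joint_reach + joint_fail)

def choose_action(p_reach, utilities):
    """Pick the action with the highest expected utility."""
    expected = {
        "return_to_runway": p_reach * utilities["runway_success"]
                            + (1 - p_reach) * utilities["runway_crash"],
        "ditch_in_river": utilities["river_ditch"],  # outcome treated as certain
    }
    return max(expected, key=expected.get), expected

# Prior belief that the glide can reach the runway, updated by evidence
# (e.g. a measured sink rate more consistent with "can reach" than "cannot").
p = posterior_reach_runway(prior=0.5,
                           likelihood_given_reach=0.8,
                           likelihood_given_fail=0.3)

action, scores = choose_action(p, {
    "runway_success": 1.0,   # best outcome
    "runway_crash": -10.0,   # worst outcome
    "river_ditch": 0.2,      # survivable but costly
})
# With these numbers p is about 0.73, yet the crash penalty dominates the
# expected utility, so the chosen action is "ditch_in_river".
```

The point is only that the decision is mechanical: update beliefs with Bayes’ rule, then maximize expected utility. Whether that counts as a “choice” is, of course, the whole question.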
Well, yes, to an extent. A pilot, human or not, has to deal with the aircraft which is “hardware.” The hardware controlling the AI system is several orders of magnitude more reliable than the aircraft components themselves. I see what you’re saying, but at least for something like an aircraft I don’t think that’s the issue at all. In fact, I would argue that an AI system (if we only consider the computer hardware itself) is likely far less prone to hardware failure than any human brain.
I think we are making progress on this front. We had a seminar last semester in which a research group at a university is creating a Bat-like flying machine. The wings are made of a material that responds chemically to an electric current and can do so at a rapid enough rate to simulate the flapping wing of a bat. It was very cool!
Re Andrew S
I think I can agree with that. I probably wasn’t very clear in my comment. Your first comment relates to living a human life and you are arguing for or against (mostly for, I think, though I’m not sure you actually stated your own position on the issue) free-will in that context. But what makes it seem like free-will is better for a human life? Is there something magical about it? Or is the reasoning all wrapped up in religious and/or existential arguments? What if the metric is more defined, like, say, landing an airplane safely? Does the question of free-will superiority change then? I think it does. But landing an airplane safely is a specific task in life, perhaps some subset of it where we can more aptly define the “goodness” metric. If we could define such a metric on a human life, would free-will still be better, intrinsically, in the sense that it would lead to a more optimal outcome?
For me, I see three main reasons why we seem to grumble at a deterministic view of ourselves:
1. We don’t like the idea of not having control over our actions. In other words, we just don’t like the idea.
2. The idea of deterministic humans opens up so many other ethical questions for science, gov’t, religion, etc. that could be VERY scary.
3. Our religion tells us otherwise and/or some other religious/existential reasoning.
Here’s my take on my own questions:
When I first started debating this with my advisor, I was VERY against the idea. I insisted there was wisdom in having a moral agent in the loop. But the more I think about it, the less able I am to articulate why that is. In fact, at least for an airplane, I can’t really see any reason to have a moral agent in the loop at all, given that the AI system would have access to the same information and be equally “intelligent.” Religion and ethics aside, I don’t see that a free-will agent is any better at a task than a deterministic one, and yet I do agree that the optimal situation for a human life is a compatibilist approach.
I think that the reasoning probably would be wrapped up in existential kinds of arguments, ultimately.
I think free will gives us the idea of our superiority and uniqueness…or, in Mormonism, something that can be related to divinity. I don’t think, then, that the metric appropriate to the topic is something like, say, “landing an airplane safely.” But maybe a metric like — hypothetically — “reaching the next stage of development” or even “becoming gods and goddesses,” then I think that free will is necessary here AND markedly more effective in achieving that goal. But that’s a religious or existential argument.
Re Andrew S
Yes, this is all I can come up with as well, so I agree.
So let me ask another question, if you’re willing. As an atheist, is free will necessary and markedly more effective? Is there a “next stage of development” for which free will is necessary? Or is it just a psychological mechanism that makes us feel unique and special? After all, we do tend to view ourselves through rose-colored lenses. Would you feel comfortable riding in an aircraft knowing there was absolutely no human in the loop? I think most people would not, but I’m wondering if the atheist perspective gives you a different view.
I think the *perception* of free will is effective. When people don’t *perceive* free will, they often get to doing weird things. (This is my position on a few things: meaning, purpose, etc. Regardless of whether things have them or not, perceiving meaning and perceiving purpose is helpful to us. The alternative isn’t often very positive.) So, I guess that’s answering in the “psychological mechanism” vein of things?
I guess I would have to know what you meant by “absolutely no human in the loop,” but I’m guessing that I wouldn’t have a problem riding in an aircraft knowing there was absolutely no human in the loop. Aircraft is a tool. I don’t think that has anything to do with atheism.
This is a parody, right? Ideas of free will and of determinism are contrived, incomplete, and inadequate ways of attempting to describe human experience/behavior. It’s like Kant spending all that time trying to figure out “human nature” in Religion within the Limits of Reason Alone. O.K., that might have seemed like a good approach at the time, but would anyone address the issue of good and evil in that way now? I sure hope not!
So I’m going to say that there is no meaning in choosing libertarianism or determinism as somehow superior, because it’s a poor question and a bad choice; if for no other reason than that “choosing” one will always be haunted by the possibility of the other. That’s a structural feature of this kind of philosophical argument.
In LDS culture what I do find interesting is that determinism goes by the name of free will in some contexts.
I don’t see the controversy in whether or not free will is *better* than determinism. It seems obvious that we can create machines that do certain tasks better than humans. There is no reason to suppose we can’t do this for an aircraft. Heck, we might have drones in flight right now which are better at performing their missions than people.
I don’t think having a moral agent buys anyone anything over everything being deterministic, but it certainly affects how I assess the moral implications of the things going on around me. It seems obvious that we assess the circumstances and moral implications of a drone that kills innocents differently than if it were a person. We may be wrong to do that, but again, that just means it is our understanding of morality that hangs in the balance. I don’t give two whits if a computer is better than a human at every single activity you can imagine.
>>”The hardware controlling the AI system is several orders of magnitude more reliable than the aircraft components themselves.”
Sure. I was just responding to the poster who felt AI was fraudulent because it is insufficiently human-like. The construction of more “human” intelligences will probably require different materials. For the construction of tools, however, the materials we’re using are just fine.
>>”What if the metric is more defined, like, say, landing an airplane safely? Does the question of free-will superiority change then?”
Task-efficiency is only meaningful in the context of some kind of goal. However competent your AI may be at task performance, it is not capable of goal-setting and goal-appreciation the way that humans are. Now, the “value” that we place upon goals and goal-setting is completely subjective, so it is not as though human consciousness has some kind of self-evident objective superiority to your AI. But until your AI is capable of disagreeing as to what is important in life, our subjective valuations are all there is. If the AI existed independently of humanity, there would be no thought or discussion as to the “metric” by which intelligences are to be judged, and there would be no “meaning” or “value” attached to the AI’s task-competency.
What we’re talking about here, though, is self-awareness vs. non-self-awareness, not free will vs. determinism.
Re Andrew S
Sorry for the vernacular. “In the loop” is a phrase from the control engineering world. The “loop” is the feedback loop and in a fully automated system the computer controls all aspects of the system. If there is a human “in the loop” then the system, somewhere in the decision making path, responds to human input as well as computer controlled input.
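As a toy illustration of the two loop configurations (the control law, the names, and the numbers below are all hypothetical; a real flight control system is vastly more involved):

```python
# Hypothetical illustration of the "loop" idea (names, control law, and
# numbers are all invented; a real flight controller is far more complex).
# The "loop" is the feedback loop: sense state, compute a command, actuate.

def autopilot_command(target_alt, current_alt, gain=0.1):
    """Computer-only control law: command proportional to altitude error."""
    return gain * (target_alt - current_alt)

def control_step(target_alt, current_alt, human_input=None):
    """One pass around the feedback loop.

    Fully autonomous: the computer's command drives the aircraft directly.
    Human in the loop: a human command also enters the decision path
    (modeled here, crudely, as the human's input taking precedence).
    """
    computer_cmd = autopilot_command(target_alt, current_alt)
    return human_input if human_input is not None else computer_cmd

# Fully autonomous: only the computer closes the loop.
cmd_auto = control_step(target_alt=10_000, current_alt=9_500)

# Human in the loop: the pilot's input enters the decision path.
cmd_manual = control_step(target_alt=10_000, current_alt=9_500, human_input=25.0)
```

In the fully automated case the computer’s command is the only thing closing the loop; adding a human anywhere in that decision path, even just as an override, is what “human in the loop” means.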
Re Jacob J
We most definitely do. I’ve watched them fly.
Interesting. I see what you’re saying. But it seems like there’s a risk assessment that goes on as well (when it comes to our own life in the balance). It’s easy to see we have no problem with a computer controlling the gas/air mixture in our car, but we might have a problem if the car controlled everything and there were no human operator and no way for a human to influence the car’s decisions. If the computer only controls the gas/air mixture this could (under the right circumstances) cause a lethal accident just like the latter scenario. Yet the latter scenario seems more risky. Perhaps this is nothing more than a lack of demonstration that such vehicles are safe. Once cars have been driving themselves around for 5 years that risky feeling will go away.
Great comment. I think what you’ve said is important and I’m trying to understand it better.
How do we decide what is important in life? Is this a deterministic process in and of itself? Given the right set of genes, and the right environment could we predict what an agent will value?
I think I can kind of see this, except that it has to be tied to determinism because we are, in fact, talking about deterministic systems becoming “self-aware” or becoming conscious. Perhaps then the discussion is really over whether or not deterministic systems can be conscious? Currently this is an impossible question since we really don’t even know what consciousness is. Free-will may be a necessary, but insufficient condition for consciousness, I dunno.
The discussion of computer-driven airplanes is like the one once had over (mechanical) computer-driven elevators.
Stephen M, that’s actually a really good point.
Especially in relating the existence or nonexistence of moral responsibility with determinism.
If it is morally impermissible to operate certain machinery (like an elevator) on the Sabbath, then an easy solution that cannot be immoral* because there is no moral agent is the Shabbat Elevator.
At least, until someone says it is forbidden anyway.
“I think that both apply non-absolutely.”
This is impossible. Libertarianism is a form of indeterminism which is simply the denial of determinism. Determinism holds that every event has some cause, some reason for why it rather than something else happened. If you believe that *some* events aren’t this way, then you are an indeterminist.
Accordingly you can choose one of the following:
1) You can embrace determinism and try to show how meaningful choices can happen in a universe where all events happen for some reason.
2) You can embrace indeterminism and try to show how meaningful choices can happen in a universe where some events happen for no reason at all.
Quantum physics mucks things up a bit, because it seems to indicate that there is some indeterministic arbitrariness at the sub-atomic level. (I’m a little weird, because I believe in the many-worlds interpretation of quantum physics. In my view, there are multiple parallel deterministic realities, in some of which a decisionmaker will choose different things on account of the different quantum states.)
However, probably most causation happens above this quantum level, so I do think you could make accurate predictions most of the time if you had complete information about the system and unlimited processing power.
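That claim (complete information plus enough processing power yields accurate prediction) can be illustrated with a toy deterministic system. The logistic map below is an arbitrary choice of mine, not anything from this discussion; it also shows the practical catch, since a chaotic system amplifies even tiny errors in the “complete” information.

```python
# A toy deterministic system (the logistic map; my arbitrary choice, not from
# the discussion). Given the complete state and the update rule, the entire
# future is computable: two runs from identical states agree exactly.

def step(state):
    """Deterministic update rule: the logistic map with r = 3.7 (chaotic)."""
    return 3.7 * state * (1.0 - state)

def trajectory(initial_state, n_steps):
    """Roll the system forward; complete information yields exact prediction."""
    states = [initial_state]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states

run_a = trajectory(0.2, 50)
run_b = trajectory(0.2, 50)        # identical state + same laws = identical future
run_c = trajectory(0.2000001, 50)  # a tiny error in the "complete" information

# run_a == run_b exactly, but run_c diverges visibly from run_a within a few
# dozen steps: determinism in principle, unpredictability in practice.
```

So determinism and practical predictability come apart: perfect prediction requires not just unlimited processing power but exactly complete information, which is exactly what we never have.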
>>”Perhaps then the discussion is really over whether or not deterministic systems can be conscious? Currently this is an impossible question since we really don’t even know what consciousness is.”
Yes, I think those are the questions we should be focusing on. As for consciousness, it can only be whatever we decide it is– at least, until some other kind of “conscious” entity comes along and begs to differ. 🙂 I think it has to be more than the ability to choose. The ability to choose is meaningless without the ability to both know you are choosing and care about the outcomes. Besides intelligence, emotion and a conception of self are probably the core components.
29 — Only if I’m playing in the definitions you’re using. If you have any difficulty understanding my points, keeping in mind that I’m not talking about determinism or libertarianism as you’ve laid them out (one hint is that I didn’t use either term), then feel free to ask any questions that will make my meaning more clear.
And don’t tell me what I can and can’t do. We don’t have that kind of relationship.
Continuing with what Blain said, the fact that the concept of “compatibilism” even exists shows that there can be a mixing of free will and determinism here.
However, I would agree with Jeff G that *libertarianism* is incompatibilist. But that’s because it is defined (not just by Jeff G, but by the “philosophical community” [I don’t know if that’s persuasive to Blain…]) as a kind of free will that is indeterminist.
32 — I will bow to the collective wisdom of the greater philosophical community as to the merits of their technical definitions. I reserve the right to make fun of them in their woolly-headedness, though. My comments were based on more general notions of “free will” and “fate,” I think.
Finite beings have finite capacity to be free. But we also all have the freedom to make choices that can’t be taken away, even if it can be coerced or manipulated. Philosophers can play with that as they wish, but they can’t make it no longer true.
I agree that it’s more of a technicality. Certainly, free will and determinism can be mixed, as long as it’s free will that’s not “libertarian” and the determinism isn’t “hard”.
And, speaking of those philosophers, via a survey of them (see http://philpapers.org/surveys/results.pl ), you can see that compatibilism is usually accepted or “leaned” by a plurality, if not majority of those surveyed.
OK, the plane landing in the Hudson is a great example, and honestly, yes, for all the kudos we gave to Capt Sully, that’s only because most pilots would have crapped themselves and killed everyone on board and themselves in a panic. But machines that are built and programmed flawlessly and comprehensively will always be better than human intuition in such situations.
This made me think of the Three Mile Island accident, another case where the technology was ready to do the right thing – the failsafes were in place to prevent meltdown – but the operator panicked and turned off the failsafes, feeling that human intuition was more trustworthy. And so there was a radiation leak. Machines 1, Humans 0.
“But machines that are built and programmed flawlessly and comprehensively…”
Argh. Joke FAIL. I was trying to start the phrase “flawlessly and comprehensively” with an italic tag, and close with a bold tag, to suggest that getting things programmed flawlessly and comprehensively may not always work.
Which I guess I proved, but in a way that makes me look dumber than it was supposed to. Please don’t let me anywhere near the software for the fully-automated planes.
(#35,#36) – Results woulda been different if Captain Kirk had been in charge! He regularly outwitted computers and got them to short out (I guess in the 23rd century they’ve forgotten about circuit breakers).
In truth, only so many situations can be programmed. Yes, Capt. Sully COULD have taken a chance and either proceeded to Newark Liberty OR back to LaGuardia…but the odds of dropping the Airbus into a crowded metro area were HIGH. He did as his training told him and ditched the plane in the Hudson because his highest priority was to save the lives aboard his plane, and he chose the surest method. If Cmdr. “Data” had been piloting, even with his lightning-fast reflexes (no USB or wireless I/F in the 24th century to skip using electro-mechanical controls and switches), he’d likely have made the same decision. When we see a machine that shows bonafide decision-making processes, rather than what’s hardwired into it, then we’ll have true AI.
I’m not a strict determinist, because there is randomness at the quantum level. But that randomness doesn’t afford us free will, so I think free will is nothing but a convenient illusion.
Like Chris, I believe the many-worlds interpretation of quantum mechanics is correct, but merely assuming space is infinite and nearly flat gets you to the situation where the question of whether “everything happens for a reason” should be seen in the context of “everything happens, period”.
So the discussion of free will versus determinism, IMO, needs to be recast in that light. For example, many worlds theorist David Deutsch has pointed out in his book “The Fabric of Reality” that the concept of time as something that “flows” may itself have to be given up and replaced with a frozen “block time” on physical grounds alone, since other times are duplicated “now” in other parallel universes or regions of space.
He has also pointed out that the only meaningful definition of freedom, then, is the ability to realize courses of action that are fully consistent with my own nature and desires. But this is precisely the aspect (and the only aspect) of freedom that is preserved in the multiverse. None of the individual copies or variants of me are ever forced by the existence of other realities to choose anything they perceive as against their own nature or desires. The absence of freedom, and the futile nature of attempting to be anything but what we were “intended” to be, is a phenomenon solely observable by something with a “simultaneous” view of the entire multiverse.
The behavior of the “superposition” (ensemble) of us is knowable from the “outside”, but internally, we can never know which portion of the ensemble we are.
Hey FireTag, I think that’s a very well-formulated summary of basically what I believe, as well.
Can’t be, Christopher. I’m the only earthling in this universe that’s this crazy. 😀
Well, I’m happy to prove you wrong. 🙂
As long as we’re on the subject of block time and many worlds, you might be interested in taking a look at this.