Does superdeterminism save quantum mechanics? (backreaction.blogspot.com)
77 points by nsoonhui on Dec 19, 2021 | hide | past | favorite | 123 comments


From watching the video and reading one of the papers, I get the sense that the theory being described doesn't work in the usual way. For example, the paper talks about "future inputs" determining what happens, which seems impossible if the entire state must be specified for each time slice without needing to reference other time slices. Instead of having a specific state at time t that can be evolved forward/backward to other times, is it instead applying constraints across spacetime?

In principle, allowing yourself that kind of flexibility could be enough to model quantum mechanics. But I'm not sure why I'd want to do it that way instead of the existing ways. It's not particularly philosophically appealing to me, and objectively speaking it looks like they don't yet have a concrete mathematical model that actually reproduces quantum mechanics. But maybe I'm just ignorant of the latter part's existence.


You might want to read about "block time" or "block universe". In this theory, everything is already pre-computed, like an uncompressed video file. So you can skip to whatever frame you want since there is no state.

> It is sometimes referred to as the "block time" or "block universe" theory due to its description of space-time as an unchanging four-dimensional "block", as opposed to the view of the world as a three-dimensional space modulated by the passage of time.

https://en.wikipedia.org/wiki/Eternalism_(philosophy_of_time...
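The analogy can be put in a few lines of Python (my own toy sketch, nothing physical about the numbers): a dynamical view has to evolve the state step by step, while the "block" view is just a precomputed array you can index at any frame.

```python
def evolve(state, dt=1.0):
    """Toy dynamics: a marble rolling at constant velocity."""
    pos, vel = state
    return (pos + vel * dt, vel)

# Dynamical view: to know the state at t=5, evolve through t=1..4 first.
state = (0.0, 2.0)
for _ in range(5):
    state = evolve(state)

# Block view: every time slice already exists; random access, no evolution.
block = [(2.0 * t, 2.0) for t in range(100)]

assert state == block[5]           # both views agree at t=5
assert block[42] == (84.0, 2.0)    # but the block can be read at any frame
```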


People who meditate often slowly begin to realize that ideas arise without conscious thought. We imagine we are "thinking", but there really is no break in causality where we somehow choose how we want the future to unfold. Our brains are no more exempt from cause and effect than a marble rolling downhill is.

It is a mistake to confuse the finger pointing at the moon with the moon itself. We don't perceive everything; our senses are a filter that allows us to navigate, but there's no more free will here than in a plant turning to follow the sun. Fortunately, we don't experience reality this way. If we did, life would cease to be meaningful.


Though limited, studies using fMRI brain scans can detect "decisions" being made before they're consciously processed.

https://journals.plos.org/plosone/article?id=10.1371/journal...


> there's no more free will here than a plant turning to follow the sun

That's a bold claim to make without presenting anything to back it up.


I'm no expert in this field, but aren't the existing assumptions around time being independent at the slice we want to take a major reason why QM and General Relativity can't be reconciled? If so, that would seem like a good reason to look at alternative ideas in this space.


I think that's more or less correct. GR suggests "time" isn't independent, we instead have 4-dimensional "spacetime". The idea that future inputs can influence past configurations can be perfectly sensible in this context. It's like a restricted form of consistent closed timelike curves.

There was some development in this direction years ago:

The Logic of Quantum Mechanics Derived from Classical General Relativity, https://arxiv.org/abs/quant-ph/9706018


Why is GR required? Already in standard QFT (Minkowski metric; flat spacetime; special relativity) the propagator 'violates causality'. Although that phrasing is misleading; it doesn't mean there's any logical inconsistency.

'The future affects the past' you might say. Yes, well, all information about the future is derivable from information about the present. So actually those 'future influences' are determined by the present. In the same way, 'locality' is almost a moot concept, because the analyticity of fields implies that the value of fields at remote locations is determined entirely by the field locally.
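The analyticity point can be illustrated concretely (a toy with exp, not a field-theory claim): for an analytic function, the full list of derivatives at a single point, here the Taylor coefficients of exp at 0, fixes the value at a remote point.

```python
import math

def taylor_exp(x, n_terms=30):
    # exp's k-th derivative at 0 is 1, so the "local data" are just 1/k!.
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

# The value "far away" at x = 5 recovered purely from derivatives at x = 0.
assert abs(taylor_exp(5.0) - math.exp(5.0)) < 1e-9
```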


I don't follow the comment about analyticity. I thought fields were real analytic, which meant one needed to use a power series to expand them, which has a potentially infinite amount of information in the Taylor coefficients? So I don't see how the local data "effectively" captures non local information.


Maybe GR isn't required. It seems like the simplest path to understanding how this could work for those who aren't familiar with QM though.


I think you may be over-simplifying. We have relativistic quantum-mechanical theories, and we can formulate quantum field theories on curved background spacetimes.


But isn't the spacetime of SR flat?


I got a bit lost at the double slit experiment with superdeterminism. Can someone clarify what is supposed to be going on?

* We send one photon at a time through a double slit, don't measure them, and an interference pattern is formed, presumably because the probabilities of the particle going through each slit interfere with each other.

* If we measure the position of the particle, it goes through only one slit, and there's no interference pattern created.

* Standard QM says that the probability wave function representing the position of the particle "collapses" when measured and the particle then goes through only one slit with no interference when we measure it.

* Superdeterminism says that the initial state of the universe determined that we would measure the particle, and therefore it goes through one slit and there's no interference.

* What I don't understand is: why would there ever be interference in that case? If it's not probabilities interfering with each other what causes single photon interference in the double slit experiment in superdeterminism?
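For the bullets above, the standard-QM arithmetic can be sketched numerically (a toy far-field model with made-up units): with amplitudes psi1, psi2 from the two slits, the unmeasured pattern is |psi1 + psi2|^2, whose cross term is the interference; after a which-slit measurement you add probabilities instead.

```python
import numpy as np

x = np.linspace(-5, 5, 1001)   # screen coordinate (arbitrary units)
phase = 1.5 * x                # toy path-difference phase ~ k*d*x/(2L)

psi1 = np.exp(+1j * phase)     # amplitude via slit 1 (far-field toy model)
psi2 = np.exp(-1j * phase)     # amplitude via slit 2

coherent = np.abs(psi1 + psi2) ** 2                   # no which-slit info: fringes
incoherent = np.abs(psi1) ** 2 + np.abs(psi2) ** 2    # measured: probabilities add

assert coherent.max() > 3.9 and coherent.min() < 0.1  # strong fringes
assert np.allclose(incoherent, 2.0)                   # flat, no fringes
```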


Well, the thing is, neither standard (non-local) quantum mechanics nor superdeterminism answers this question: both simply talk about a wave function that takes a different shape depending on what measurement is made. The only difference is that one completely gives up on further clarifying what the wave function physically represents, while the other encourages looking for hidden variables.


I've heard the interference discussed as an "interference of probability amplitudes".

But in superdeterminism, it seems as if there are no probabilities in that sense, or am I missing something?


Superdeterminism does not deny probabilities (just like, say, classical mechanics does not deny statistical mechanics).


Not following.

If everything is predetermined, there are no actual probabilities except as an artifact of our lack of knowledge of what the outcome must be.

If that is the case, how can the probability amplitudes interfere with each other?


Even as an artifact of a lack of knowledge, probabilities can do many things - correlate, interfere...


I don't see how one's lack of knowledge can cause lines to form differently (different interference patterns).

If anyone has any more insight into this, please do weigh in.


It's a good question. The interference pattern is a result of two things:

1) Quantum mechanics: the pilot wave or whatever you want to call it.

2) Incoming particles' positions having a broad statistical distribution.

With a cunningly skewed / rigged distribution you wouldn't see fringes (e.g. suppose they all come in on the same trajectory). However such distributions are atypical.*

But we talk about probability being a 'measure of ignorance' here because we don't know where these particles are exactly, but we know they must have gone through the initial slit, and QM says: in that case, here's the probability it will end up at this final destination.

* This is the same story as statistical mechanics. (That said, we do have the issue there that our systems start in low-entropy, or 'atypical', states. That's a whole other topic.)
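A quick Monte Carlo of that point (my own toy numbers): each particle lands at one definite spot, and the fringes only show up as statistics over a broad ensemble of initial conditions; a rigged ensemble with identical trajectories gives a single spot.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 1001)
density = np.cos(1.5 * x) ** 2 * np.exp(-x**2 / 8)   # fringed screen density (toy)
density /= density.sum()

hits = rng.choice(x, size=200_000, p=density)         # one definite spot per particle
counts, _ = np.histogram(hits, bins=100, range=(-5, 5))

# Bright fringe near x~0 (bin 50) vs dark fringe near x~1.05 (bin 60).
assert counts[50] > 10 * counts[60]

# A "rigged" ensemble: every particle on the same trajectory -> a single spot.
rigged, _ = np.histogram(np.zeros(1000), bins=100, range=(-5, 5))
assert rigged[50] == 1000 and rigged.sum() == rigged[50]
```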


Right, in classical QM.

But in superdeterminism unless I'm really misunderstanding, they are basically saying that there are no probabilities at all because everything is predictable given the initial conditions of the universe, if you knew the hidden variables.

Any "probabilities" are just what one observes to happen, but they aren't doing anything any more, they are just the result of incomplete knowledge. If you knew the hidden variables, you would be able to predict everything exactly and there would be no probabilities as such.

In that case, what is creating the interference pattern and why?


Nit: the single slit case still gives you an interference pattern. It's just that it's a single slit pattern and not a double slit one


> Superdeterminism says that the initial state of the universe determined that we would measure the particle, and therefore it goes through one slit and there's no interference.

Where did you get this from? I don't think Sabine implied that. From what I understood, it is more like the measurement interacted with the particle and it changed its behavior. I don't see why we need to consider that it was always meant to be that way.


It seems that can't be true, because if you randomly decide after the particle is launched whether to measure it or not, the interference pattern still appears.

So it's not the measuring that does anything in superdeterminism apparently; it's that if you do measure it, that was "known" in advance by the state of the universe.

If not that, what is supposed to be going on (physicists please respond).


> if you randomly decide after the particle is launched whether to measure it or not, the interference pattern still appears.

So, if you run an experiment where half the time the machine measures the particles and half the time it doesn't, does it still form an interference pattern in both cases? That does seem odd. I would expect that whenever a measurement is made, the interference pattern would disappear.


It forms a weaker interference pattern - imagine superimposing 50% of the interference pattern and 50% of the non-interfering one.
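That superposition is easy to check numerically (same toy model as any textbook sketch): mixing 50% fringes with 50% flat background keeps the fringes but halves the visibility.

```python
import numpy as np

phase = 1.5 * np.linspace(-5, 5, 1001)
coherent = np.abs(np.exp(1j * phase) + np.exp(-1j * phase)) ** 2   # fringes, 0..4
incoherent = 2.0 * np.ones_like(phase)                             # flat background

mixed = 0.5 * coherent + 0.5 * incoherent

def visibility(I):
    # Standard fringe-visibility measure: (Imax - Imin) / (Imax + Imin).
    return (I.max() - I.min()) / (I.max() + I.min())

assert visibility(coherent) > 0.99           # full-contrast fringes
assert abs(visibility(mixed) - 0.5) < 0.01   # washed out, but still fringes
```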

There's also the fun of the delayed-choice quantum eraser experiment, where if you measure (or not) which slit the photon went through after measuring the interference pattern (or not), you see (or don't see) the interference pattern.


Ok, that's definitely not what OP said. Of course the particle can still interfere with itself after the slit; it's just that when the measurement is made, the two-wave interference pattern gets much weaker or non-existent. Also, the experiment you are alluding to does not say what you think it says. Perhaps you'd be interested in another video from Sabine. https://www.youtube.com/watch?v=RQv5CVELG3U&ab_channel=Sabin...


I don't/can't watch videos. I'll happily look at something written, but frankly "the experiment you are alluding to does not say what you think it says" does not convince me that you know what you're talking about and are looking to find the truth. I have a master's in this stuff and I'm pretty sure what I said is a fair description of the experiment in question; if you think something is different about it then be specific.


You can read the transcript. Sean Carroll also wrote about it: https://www.preposterousuniverse.com/blog/2019/09/21/the-not...


I agree with the article you've linked (although the graphs are either mislabelled or being used to illustrate a different experiment from the one they're actually taken from; in the Kim et al setup R03 is not R01 + R02, it's a measurement from a different detector). How do you think it contradicts what I said?


determinism implies that afaik?


"If statistical independence is violated, this means that what a quantum particle does depends on what you measure."

But in standard QM without superdeterminism, what a particle does already depends on what you (i.e. the experimenters) measure - how you set the relative angles of your detectors determines the amount of correlation, even when setting them at spacelike separation after the entangled pair is released (say Alice chooses hers and Bob his without any communication between them).

It's the same physical facts and we either take superdeterminism or quantum nonlocality (or other interpretations not at issue here). To me the conspiracy of superdeterminism is too great.


What you describe is "quantum contextuality". What Sabine is saying is that so-called superdeterminism/giving up statistical independence implies the contextuality that is needed to explain quantum mechanics using hidden variables, and it does so in a simple way if we accept future input dependence.


Bell nonlocality is not the exact same thing as contextuality from what I’ve been able to find.


I didn't even mention non-locality.


I guess I thought I was describing non locality


Yeah this seems like yet another attempt by someone to use different words to explain the exact same quantum theory we have always been using, then trying to claim because they used different words they somehow came up with a special new theory.


Uh, the OP frequently mentions "statistical independence", and I am uncomfortable with that: independence is a property of a set of two or more random variables. It is defined in probability theory and often used in statistics, but it is not defined within statistics itself. There is a polished, elegant, and thorough treatment of such independence, including of uncountably infinite sets of random variables, in J. Neveu, Mathematical Foundations of the Calculus of Probability.


In the context of Bell’s Theorem, statistical independence is understood to mean that hidden variables, if they exist, are not correlated with how the measurements are performed.

Bell’s Theorem is only correct if this assumption holds. Hossenfelder is arguing that the assumption is incorrect: that Bell’s Theorem is incorrect precisely because there ARE hidden variables and that these ARE correlated with measurement settings.

She also argues that all mentions of free will in all related discussions are meaningless red herrings that have distracted physicists from properly interpreting Bell’s Theorem and observed violation of Bell’s Inequality.

Superdeterminism, arguably misnamed, simply argues that QM is deterministic, where Bell and others have argued it is not.


> Superdeterminism, arguably misnamed, simply argues that QM is deterministic, where Bell and others have argued it is not.

No, this is completely wrong. Super-determinism is more than determinism. Super-determinism is about conspiratorial coincidences – so the measurement settings you choose just happen to be the ones which will make it look like the world is quantum.

Other quantum interpretations are also deterministic, like many worlds and pilot wave theory. Bell's theorem doesn't assume determinism, but most importantly (1) locality (2) counterfactual definiteness (3) a single classical world (4) no retro-causality and (5) no super-determinism.
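For concreteness, here's the CHSH form of Bell's theorem in a few lines: QM predicts the correlation E(a, b) = -cos(a - b) for a spin singlet, and at the standard angles the CHSH combination reaches 2*sqrt(2), while any theory satisfying the assumptions listed above stays within |S| <= 2.

```python
import math

def E(a, b):
    # Quantum prediction for the singlet-state correlation at settings a, b.
    return -math.cos(a - b)

# The standard CHSH settings (radians).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# QM saturates the Tsirelson bound 2*sqrt(2), beyond the classical bound of 2.
assert abs(abs(S) - 2 * math.sqrt(2)) < 1e-12
assert abs(S) > 2
```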


Many worlds is not deterministic in the sense that the experimenter cannot predict in which world they will end up.

Pilot wave theory is deterministic, but non-locality is retrocausal influence under GR.

I also don't think your take on superdeterminism is entirely correct. As Sabine says, "statistical independence" is the "no superdeterminism" assumption, and some superdeterministic theories can be conspiratorial, but that doesn't mean all such theories are conspiratorial. This is the mistake.


> Many worlds is not deterministic in the sense that the experimenter cannot predict in which world they will end up.

It is deterministic. The experimenter ends up in all of them.


I think these consequences are only conspiratorial if time is an independent variable. What if everything, the entire universe’s past, present, and future ‘happened’ at the same time, as a self-consistent solution of some kind of underlying equation or set of rules? Our brain, the seat of our perception of time, is part of the same universe that we’re trying to explain. Maybe time is an artifact of the self-referential feedback loop that causes our consciousness.


It's still conspiratorial. You just happened to choose the right particles to measure? It makes a mockery of the idea of trying to do science at all - the experiments you take are already determined, you can't learn anything about what causes have what effects because you can't ever change a causal variable.


Well, I don’t know of any physical law that allows you to choose anything. Whether you believe QM is ordinarily deterministic or superdeterministic, you have to believe in something outside physics to make all of us not a necessary outcome of an initial state. That doesn’t diminish the pleasures or the reality of consciousness though, or the (illusion of) free will, at least not for me.


I'm not particularly concerned about free will. But we have to have some notion of effects following from causes if we're to be able to reason about physics at all - otherwise the universe might as well be just a pile of stuff happening one after another with no rhyme or reason to it. Superdeterminism undermines that completely.


> It makes a mockery of the idea of trying to do science at all - the experiments you take are already determined

The laws of physics were there before people started studying physics. Didn't make it less interesting for those who were interested.

Everything we do is a mockery - life expectancy is 75 if you are lucky, and the universe doesn't care about your achievements

>you can't learn anything about what causes have what effects because you can't ever change a causal variable.

[shrugs] If you don't have free will, then how can you change something? What part of a computer "learns" during gradient descent?


> The laws of physics were there before people started studying physics. Didn't make it less interesting for those who were interested.

But they're interesting because they are laws - because there's some structure there, because the same causes consistently have the same effects. Superdeterminism denies all that.

> If you don't have free will, then how can you change smth?

I'd say that if you're an inherent part of the causal chain that makes something happen then it's fair to say that you changed it. You don't have to assume free will to acknowledge that we affect our environment.

> What part of a computer "learns" during gradient descent?

I don't know or particularly care, but the learning happens - you can't understand the behaviour of the system otherwise.
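For what it's worth, the point is easy to make concrete: nothing in a gradient-descent loop "chooses" anything, yet it's natural to say the system learns the minimum.

```python
def descend(x, lr=0.1, steps=100):
    # Pure arithmetic, no choices: minimise f(x) = x^2 by following -grad f = -2x.
    for _ in range(steps):
        x = x - lr * (2 * x)
    return x

# The loop deterministically "learned" the minimum at 0.
assert abs(descend(5.0)) < 1e-6
```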


> But they're interesting because they are laws - because there's some structure there, because the same causes consistently have the same effects. Superdeterminism denies all that.

There can be structure without cause and effect. Take any static physics problem, for example a flexible sheet that’s stretched over some frame. Since there is no time dimension there is no cause and effect, but at every point in the sheet there is structure (Poisson’s equation).

Both QFT and GR are of the same differential order in the time and space dimensions. Now, time is special among the dimensions because of the second law of thermodynamics, but I still have this feeling that it’s our being part of the universe, and our brains having sufficient complexity for self-reference, that causes the illusion of a linear flow of time, while in fact all time happened in one ‘instant’ and one of the requirements for our universe to exist is that it is self-consistent (causing these weird conspiratorial coincidences).


> There can be structure without cause and effect. Take any static physics problem, for example a flexible sheet that’s stretched over some frame. Since there is no time dimension there is no cause and effect but at every point in the sheet there is structure (Poisson’s law).

Up to a point - that kind of structure still depends on a notion of spatial distance and "influence" where you can reason locally about subsets. It's fairly fundamental that some parts of the sheet are more closely related to each other than other parts of it. If you had a model where every point of the sheet was equally closely related to every other point, you wouldn't be able to have any nontrivial structure, I think.


> But they're interesting because they are laws - because there's some structure there, because the same causes consistently have the same effects. Superdeterminism denies all that.

Wait, how does it deny it?

> i'd say that if you're an inherent part of the causal chain that makes something happen then it's fair to say that you changed it. You don't have to assume free will to acknowledge that we affect our environment.

It's separating us from the environment and saying that one part changed the other, but that's just a simplification, since our heads can't contain the whole system with all its events. A blurry reflection. So now we "look" at this reflection and say: this is wrong, it makes no sense.

Without the simplification, we had some "initial" state and it "goes" somewhere. Like sand in an hourglass. We all know where the sand goes and what will happen in an hour.

We have words like "interesting, learn, understand" which don't really mean anything when we think about determinism. Science requires an observer but who is looking?


> Wait, how does it deny it?

To be able to talk about cause and effect you have to have a concept of independence. Superdeterminism's basic premise is essentially that everything in the past can affect everything in the future - you chose which direction to measure because the particle was polarised a particular way (or because of some common underlying cause). So how can you possibly talk about what caused you to measure a particular direction when apparently it was magically due to this thing that has no visible connection to it?

> We have words like "interesting, learn, understand" which don't really mean anything when we think about determinism. Science requires an observer but who is looking?

And yet science does work. We're able to make predictions and see them borne out.


> So how can you possibly talk about what caused you to measure a particular direction when apparently it was magically due to this thing that has no visible connection to it?

You mean a connection we can measure.

As I understand it, our measurements rely on what we know, and what we know relies on previous events. It's like what and how we measure something is simultaneously determined by that something. We can't measure the size of a cat's soul because we don't have instruments for it, but if we invented the instruments, there would be no choice other than measuring it in the particular way the soul allows us to measure it.

I don't know if it makes sense at all, but I tried to make sense of it.


Not necessarily that we can measure, but that we can reason about. We can't directly measure energy for example, but we know/assume that it obeys certain laws regarding conservation and locality.

If the cat's soul changed the usual laws of physics around the cat, it would destroy our ability to do physics when cats were nearby. It's not about whether we can measure it but about whether we can isolate its effects. If you can't do an experiment that doesn't depend on the state of the entire universe, you can't do an experiment.


At least for now, I'm willing to forget about issues of "free will".

Thanks, I will keep trying to make sense out of Bell's work.

I keep getting stuck trying to read quantum mechanics: One place was the claim that the wave functions form a Hilbert space. Nope: As I read in W. Rudin, Real and Complex Analysis, a Hilbert space is a complete inner product space where complete means that every Cauchy convergent sequence is convergent. Well, while the wave functions are likely points in a suitable Hilbert space, they can't be complete, e.g., they can converge to a point in the space, i.e., a function, that is not continuous and, thus, not differentiable in contradiction to the assumption that all wave functions are differentiable. I admit that this is a small point, but I was trying to take quantum mechanics seriously and be careful.

Closer to the OP, another place I got stuck was in the approaches of physics to independent and uncorrelated: In probability theory those two are not the same: For two real valued random variables, independence implies uncorrelated. As in W. Feller, in the case of two real valued random variables with joint Gaussian probability density function, uncorrelated implies independence. Generally, however, uncorrelated does not imply independence. Independence is a much stronger property than uncorrelated. Maybe eventually I will figure out what physics means by "uncorrelated", especially for Bell's work.
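The uncorrelated-vs-independent distinction is easy to see numerically: X uniform on [-1, 1] and Y = X^2 have zero covariance by symmetry, yet Y is completely determined by X.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 1_000_000)
Y = X ** 2   # maximally dependent on X

# cov(X, X^2) = E[X^3] = 0 by symmetry, so the sample correlation is ~0.
corr = np.corrcoef(X, Y)[0, 1]
assert abs(corr) < 0.02        # uncorrelated (exactly 0 in the infinite-sample limit)
assert np.all(Y == X ** 2)     # yet Y is a deterministic function of X
```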

To me, we can take the Ace of Hearts and the Ace of Spades, shuffle them, and deal them out, face down, one each to Bob and Sally. Bob can go a light year away. Sally then looks at her card and knows right away, nothing faster than the speed of light needed, what Bob's card is.

We know all the associated probability distributions for Bob and Sally. And we know that as soon as the cards are dealt what each of Bob and Sally have is determined -- so far unknown but still determined.

I'm guessing that this Bob-Sally thought experiment may have something to do with entanglement, the EPR (Einstein, Podolsky, Rosen) paradox, "spooky action at a distance", collapse of quantum mechanics wave functions, and Bell's results -- but I need to keep studying.
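The Bob-Sally game can also be written down as code, which makes it clear it's a classical hidden-variable model: the "hidden variable" (which card went where) is fixed at dealing time, and no signalling is needed. Bell's point is that QM correlations at multiple measurement angles cannot all be explained this way.

```python
import random

def deal(rng):
    cards = ["Ace of Hearts", "Ace of Spades"]
    rng.shuffle(cards)            # the hidden variable is set here, at the source
    return cards[0], cards[1]     # Sally's card, Bob's card

rng = random.Random(0)
for _ in range(1000):
    sally, bob = deal(rng)
    # Sally infers Bob's card instantly from her own; nothing travels anywhere.
    inferred = "Ace of Spades" if sally == "Ace of Hearts" else "Ace of Hearts"
    assert inferred == bob
```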


> To me, we can take the Ace of Hearts and the Ace of Spades, shuffle them, and deal them out, face down, one each to Bob and Sally. Bob can go a light year away. Sally then looks at her card and knows right away, nothing faster than the speed of light needed, what Bob's card is.

> We know all the associated probability distributions for Bob and Sally. And we know that as soon as the cards are dealt what each of Bob and Sally have is determined -- so far unknown but still determined.

Right, so that's a hidden variable theory. But the Bell Inequalities give you a set of measurements that can't be explained by hidden variables - one simpler variation is Conway's "free will theorem": Bob and Alice each have something like 29 envelopes, and if he opens one of several sets of 3, then he will always find two black cards and one white card (and Alice will always find the opposite). But there's no way to "statically" assign black and white cards to envelopes so that all of these sets of 3 have that property. So either which cards are in which envelopes really is undetermined until Bob opens a set of 3, or you somehow know which envelopes Bob is going to open at the point where you're putting the cards in the envelopes.


You might be interested in some code that illustrates the problem of hidden variables and the EPR paradox: https://pastebin.com/J4ZUhG8e. The issue is that we can't replicate what QM predicts (and experiments validate) using hidden variables without additional steps or assumptions. For example, in that code, there are a few possible ways we could still produce the QM correlation function with local hidden variables:

1. When we measure the first particle, we alter the second particle to produce the desired correlation. This is the communication loophole. If you move the particles far enough apart, it requires superluminal communication, which relativity says is impossible. This is what Einstein was getting at in the EPR paper: if what the Copenhagen interpretation of QM says is true, and the particle is in both states at once, then measuring it collapses it to a given state, and the other particle suddenly "knows" what it should be, even though the propagation of information through the universe has a speed limit.

2. We could define an additional hidden variable in [0,1], and do rejection sampling at the detector. Anything that is rejected is not detected, and we only deal with detected particles. This is the detection loophole. I'm not an expert, but this is the only one that really makes sense to me as a good candidate for a hidden variables theory, and I believe it's been ruled out by experiment.

3. We could bias the initial sampling of our hidden variables. The problem is, this requires that we know the setting of the detector ahead of time. As far as I understand it, this is essentially what superdeterminism comes down to: you know the experiment setting ahead of time because everything is completely deterministic, like a movie reel being rolled forwards. You can thus bias the initial sampling step to produce the QM correlation function. Aside from being unfalsifiable, it still leaves open the question of why this would happen. It essentially means that all QM experiments have predetermined outcomes, and for whatever reason, those outcomes are the outcomes we observe.

Ultimately, it seems to me that Bell's inequality is more a statement about the fundamental incompatibility of quantum probability with classical mechanics and probability. If QM is correct, and it seems that it is, then you have to give up certain assumptions such as locality.
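Since the pastebin isn't reproduced here, here's a minimal local-hidden-variable sketch in the same spirit (my own toy, not the linked code): a shared angle lambda is fixed at the source, and each wing's outcome depends only on its own setting and lambda. Any such model keeps the CHSH combination near or below the classical bound of 2, short of QM's 2*sqrt(2).

```python
import math
import random

def outcome(setting, lam):
    # Deterministic local response: depends only on this wing's setting and lambda.
    return 1 if math.cos(setting - lam) >= 0 else -1

def E(a, b, n=100_000, rng=random.Random(42)):
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, 2 * math.pi)   # hidden variable, fixed at the source
        total += outcome(a, lam) * -outcome(b, lam)
    return total / n

a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

assert abs(S) < 2.05             # local model stays at the classical bound (~2)
assert 2 * math.sqrt(2) > 2.05   # QM's prediction exceeds it
```

This particular model gives the linear correlation E = -1 + 2|a-b|/pi rather than QM's -cos(a-b), which is exactly the gap Bell's inequality exposes.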


> 3. We could bias the initial sampling of our hidden variables. The problem is, this requires that we know the setting of the detector ahead of time.

The standard computational approach to this is to define the distribution as lazily evaluated based on some future state. That's exactly what Sabine is suggesting for a superdeterministic theory.

> It essentially means that all QM experiments have predetermined outcomes, and for whatever reason, those outcomes are the outcomes we observe.

More or less. This doesn't seem to bother anyone when spin 1/2 particles are all governed by the Dirac equation "for whatever reason", but somehow people really seem to think it matters in this case.
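The "lazily evaluated" idea can be caricatured in code (entirely my own toy, not a physical model): the hidden variable is a closure fixed at the source but only resolved once the future detector setting is supplied.

```python
import random

def prepare_pair(rng):
    seed = rng.random()                 # the "hidden variable" is fixed at the source...
    def resolve(detector_setting):      # ...but only evaluated against the future setting
        return (seed + 0.1 * detector_setting) % 1.0
    return resolve

rng = random.Random(0)
hv = prepare_pair(rng)

assert hv(0.0) == hv(0.0)   # deterministic given the setting
assert hv(0.0) != hv(1.0)   # but the statistics depend on the (future) setting
```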


I gotcha. It actually seems obvious and fairly reasonable when you put it that way, since it removes the "conspiratorial" element that theories of superdeterminism seem to invoke.


Her recent paper on this is worth a read: Rethinking Superdeterminism, https://www.frontiersin.org/articles/10.3389/fphy.2020.00139...


>To me, we can take the Ace of Hearts and the Ace of Spades, shuffle them, and deal them out, face down, one each to Bob and Sally. Bob can go a light year away. Sally then looks at her card and knows right away, nothing faster than the speed of light needed, what Bob's card is.

I don't get it. Sally knows it right away only because she knows the rules of the game; the electricity in her brain doesn't need to travel a light year. Though if she's brain-damaged, it could take her much longer to figure that out, or it may never happen at all.


> Superdeterminism, arguably misnamed, simply argues that QM is deterministic

This is incorrect. QM is already deterministic. Where oh where do people get the idea that it is not?


The Born rule, for starters.


In QM, evolution is unitary. The Born Rule is an empirical observation that is adequately explained by the decoherence of a quantum system into entangled states of observer, environment, measurement apparatus and the system being measured.

E.g. the master equation for a quantum density matrix governs its evolution when the environmental conditions are not completely known or modelled. Classical ensembles or 'decohered' states occur as a consequence.

QM is formulated as a deterministic theory. When you try to model an open, interacting system without tracking its environment, you have to sacrifice determinism in that model, because you are failing to track all the information in the system. You settle for probabilistic quantities. Nothing quantum about it either-- same thing happens in classical mechanics. They call this "classical thermodynamics".
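The decoherence story above in a few lines (a toy with a two-state environment): entangle a qubit with orthogonal environment states, trace the environment out, and the off-diagonal (interference) terms of the reduced density matrix vanish, leaving the Born-rule probabilities on the diagonal.

```python
import numpy as np

alpha, beta = 0.6, 0.8   # qubit amplitudes, |alpha|^2 + |beta|^2 = 1

# Joint state |psi> = alpha|0>|E0> + beta|1>|E1>, environment states orthogonal.
psi = np.zeros(4, dtype=complex)
psi[0] = alpha   # |0>|E0>
psi[3] = beta    # |1>|E1>

# Density matrix indexed as rho[s, e, s', e'], then partial trace over env.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_sys = np.trace(rho, axis1=1, axis2=3)

# Off-diagonals are gone; diagonal carries the Born-rule probabilities.
assert np.allclose(rho_sys, np.diag([alpha**2, beta**2]))
```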


There might be some non-trivial deterministic unitary description via decoherence, but it should not be surprising that many people think QM is non-deterministic.


I guess the whole field of statistical mechanics would make you uncomfortable.


What I have seen so far for statistical mechanics is uses of integral calculus notation that has nothing to do with anything in any version of calculus, advanced calculus, measure theory, the Riemann integral, Lebesgue integration, etc.


Well, if you're doing fermionic thermal field theory, sure (where the "integral" over Grassmann numbers is just some linear operation satisfying certain properties). But apart from that, I thought all the thermodynamic quantities were defined in terms of the partition function, which is a perfectly well-defined (Riemann) integral...
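As a concrete illustration of that claim (my own sketch, not from the thread): the classical 1-D harmonic oscillator's partition function is an ordinary Gaussian integral, and a plain midpoint Riemann sum converges to the closed form Z = 2π/β (with m = k = 1 and the h-normalisation dropped).

```python
import math

# Classical 1-D harmonic oscillator, H(x, p) = (p**2 + x**2) / 2 with m = k = 1.
# Z(beta) = integral of exp(-beta * H) over phase space = 2*pi/beta,
# a perfectly ordinary Gaussian integral, here done as a Riemann sum.
def partition_riemann(beta, lim=10.0, n=400):
    h = 2 * lim / n
    # 1-D Gaussian integral by midpoint rule; Z factorises as its square.
    g = sum(math.exp(-beta * (-lim + (i + 0.5) * h) ** 2 / 2) * h
            for i in range(n))
    return g * g

print(partition_riemann(1.0))  # converges to 2*pi/beta, about 6.2832
```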


I can’t agree more with the OP about free will and stuff. The main problem of superdeterminism, though, is that it both explains everything and nothing. It’s like giving up.


It’s also an inconvenient idea. For instance, how can one be guilty of a crime if one had no choice in committing it? Why try at anything if you have no choice — everything that will happen was decided at the universe’s origin and is merely playing out. It’s nihilism to the extreme.

Then again, the quantum idea of true randomness isn’t much better — if human choice stems from a coin flip, can you really call it a conscious choice?

The scientific method and human free will rely on a possible fiction of choice that is neither 100% deterministic, nor 100% random.


> For instance, how can one be guilty of a crime if one had no choice in committing it?

Guilty is a social concept derived from the assumption that some events in the universe are "bad"

> Why try at anything if you have no choice — everything that will happen was decided at the universe’s origin and is merely playing out.

> It’s nihilism to the extreme.

Realising that there is no free will is not something negative or positive. It's just something that sets your further path, which had no other purpose but spreading Life in the universe.


> For instance, how can one be guilty of a crime if one had no choice in committing it?

Why do you think you had no choice in committing it? Are you saying these people don't have any reasons they think justified committing a crime? That seems clearly false in general. Certainly if someone was literally forced to commit a crime, ie. they didn't want to do it, then they are not guilty, but why wouldn't they be guilty if they wanted to commit the crime and then did so?

This is a common confusion in thinking about moral responsibility under determinism.

"Choice" is a process whereby a set of options is reduced to one. This process is typically driven by an individual's reasons for acting, ie. there existed a hypothetical set of possible actions Y, but I did action X from Y because of reason Z. Z counterfactually describes why X was selected from Y, and when Z is an internally held value/justification for acting, this is a person's "will".

When a person is "free" to use their will to make choices, then they are acting of their own free will. If they are instead coerced into choosing W from Y instead of X, where W is defined by somebody else's values, then they are no longer free to act on their will. This is Compatibilism, where free will and moral responsibility are compatible with determinism.

I also think it's important to distinguish "free will" as used in ethics and "free will" as used in science, where they speak of the experimenter's freedom. They are simply not the same thing, because one is compatible with determinism where the other may not be.

> Why try at anything if you have no choice — everything that will happen was decided at the universe’s origin and is merely playing out.

What you're describing is fatalism, not nihilism or determinism. Given some tragedy is caused by a person, consider two scenarios:

1. they wanted that tragedy to happen

2. they fought tooth and nail to prevent it from happening

It's very clear to most people that the person in scenario #1 should be held responsible, where in #2 they should not. #2 is fatalism and absolves a person of moral responsibility, but this does not describe what happens under determinism. Under determinism, the people who make bad choices wanted to make those choices, and being held responsible is the moral feedback they need to correct their flawed decision process.


What is the origin of genuine choice?


> The main problem of superdeterminism, though, is that it both explains everything and nothing.

Not sure why you think that. You still have to explain precisely what is superdeterministic and what isn't, and elaborate how this works in a coherent way, and that process will end up telling you a lot. Is that really any different than accepting non-locality to explain spooky action at a distance?


Exactly so. It's an interesting philosophical idea, but it by definition invalidates the scientific method, and all forms of meaningful reason. The correct answer to the question of whether you are dreaming a dream from which you could never wake up isn't "perhaps that explains why my math doesn't make sense"; it's: by premise, that cannot matter.


> Exactly so, its an interesting philosophical idea,but its by definition invalidates the scientific method, and all forms of meaningful reason

It doesn't, and Sabine specifically addresses this point.


The explanation of superdeterminism she gives here isn’t a “full” superdeterministic theory (nothing actually happens over time, there is no physics, the universe is just a movie being played back, etc.), and she does say it only applies at the quantum level. So there’s still room for science.


Just means she hasn't thought of the inevitable consequences. Besides, the paradox she is resolving isn't a paradox, she just does not understand the answer in standard QM.


Seems like if you “thought of the consequences” and “understood QM” here you’d have a theory of everything all ready to go.


I do, it's called "quantum mechanics". It's the most thoroughly verified theory in the history of physics, and in my subjective opinion the most mathematically beautiful one. It is the greatest pinnacle of human scientific achievement.

But people think the idea that things might superimpose on each other like waves is weird; it's not how they see everyday objects that are 10,000,000,000x larger behave, so it can't possibly be right, and they keep making up overcomplicated nonsense so that they can keep pretending that elementary particles behave like billiard balls.


What’s going on with gravity though? Actually, did they decide if the holographic principle is true yet?


I don't know, and a theory with a good answer to that may well supersede QM in the long run. But personally I don't find the idea of a standard model + spin-2 graviton so offensively complex the way some people apparently do, so I don't feel a lot of pressure to find a more unified theory of quantum gravity.


It's pretty hilarious that you think Sabine Hossenfelder doesn't understand standard quantum mechanics.


I'm from the many-worlds interpretation camp, and this "superdeterminism" business always strikes me as ironic.

Proponents of hidden variables, in their desire to explain QM effects, arrived at the idea that there is something that permeates the Universe since the Big Bang and participates in every physical interaction. Existence of this something cannot be directly proven - since we are "inside" of it.

How about the "wave function of the Universe"?


> Proponents of hidden variables, in their desire to explain QM effects, arrived at the idea that there is something that permeates the Universe since the Big Bang and participates in every physical interaction.

Actually that's not correct. In Bohmian mechanics the wave function is not actually real, it merely describes a law of motion. You're doing what many MWIers do, and applying the ontology of MWI to other ontologies and concluding they assume redundancies. That's just not how it works.


Their goal is to find the deeper explanation behind wave functions, so obviously going back to wave function, even that of the Universe, is not very appealing to them.


Well if they insist on expressing the dynamics of the universe as the minimisation of a global (or local) action functional, then they are already throwing in the towel because that implies a wavefunction of the universe.

I'm suspicious that these fringe quantum-physics-deniers just don't understand the mathematics. They want to be relevant but they don't want to learn, so they post wordy, fluffy blog posts targeted at non-professionals. Like Sabine.


Except she does not "deny" quantum mechanics, and she does understand the math.


Totally agree. It's therapeutic to read this.

Last I checked, many-worlds is a consequence of current quantum theory, not a postulate or additional axiom. Yet people treat it like such. Probably because it's 'spooky'. Then indeed it's ironic that they search for alternative explanations which try to say the same thing as many worlds but without saying 'many worlds'.


> Last I checked, many-worlds is a consequence of current quantum theory, not a postulate or additional axiom.

There is an additional postulate, namely that the state vector is the real world we inhabit. This may seem obvious to you and not amount to postulating much or anything with any substance, but that's a philosophical claim that isn't suggested by the physics.

There are also the problems of deriving the Born rule from the existing postulates of MWI. Last I checked, the existing derivations are not fully satisfactory to most physicists.


> There is an additional postulate, namely that the state vector is the real world we inhabit. This may seem obvious to you and not amount to postulating much or anything with any substance, but that's a philosophical claim that isn't suggested by the physics.

Any interpretation has to postulate that something "is" the real world, in that sense. Postulating that there's an additional entity dependent on the wavefunction that "is" the real world (as e.g. pilot-wave theories do) is violating Occam's Razor.

> There are also the problems of deriving the Born rule from the existing postulates of MWI.

There are, but again, all interpretations have that problem, and adding more postulates just makes it worse. E.g. if you take a Copenhagen interpretation you have to derive the probability rule and a rule for what constitutes a "measurement" that triggers when the rule should be applied.


> Postulating that there's an additional entity dependent on the wavefunction that "is" the real world (as e.g. pilot-wave theories do) is violating Occam's Razor.

Not really, because in Bohmian mechanics the wave function is nomological, ie. not real, and merely describes a law of motion. Bohmian mechanics is kind of the dual of many worlds in this sense, and so requires no more postulates.

> There are, but again, all interpretations have that problem

Since you mentioned Bohmian mechanics, the Born rule was derived from the postulates a long time ago, and the Bohmian form of Schrodinger's equation is structured in a such a way that the quantum component goes to zero in the limit, thus reducing to classical mechanics.


> Not really, because in Bohmian mechanics the wave function is nomological, ie. not real, and merely describes a law of motion.

I don't see that that's a meaningful/objective distinction? The wavefunction is certainly physically meaningful in the sense that you can't predict experimental results without computing its behaviour (or something equivalent to it). I don't think that you can reduce your number of postulates by declaring parts of your theory "not real" - given that elementary particles are not directly observable, couldn't we just declare that e.g. electrons are "not real" and merely describe a law of experimental results?

> Since you mentioned Bohmian mechanics, the Born rule was derived from the postulates a long time ago

It's not really derived, rather something equivalent to the Born rule is included as one of those postulates. At some point you have to go from wavefunction to probability distribution, and you either make the rule for that an outright postulate or have some plausible but unsatisfactory argument about how dynamical evolution makes this the "right" rule. Different interpretations do this at different points, but they all have to do it somewhere.


> The wavefunction is certainly physically meaningful in the sense that you can't predict experimental results without computing its behaviour (or something equivalent to it).

The wavefunction in Bohmian mechanics is just as interesting and unreal as the Hamiltonian in classical statistical mechanics. Which is to say that, sure, you still need something like it to calculate outcomes and what you observe will conform to it, but that doesn't make it real. The particles are real here, and they just follow a law of motion described by the wave function.

> given that elementary particles are not directly observable, couldn't we just declare that e.g. electrons are "not real" and merely describe a law of experimental results?

Sure, that's what ontology is all about: define what's real (base axioms), and derive what else we observe in terms of what you consider real. This needs to fit into a coherent total picture, and you want one that's general and parsimonious.

For MWI, the wavefunction is real and the grooves are real worlds that evolve in parallel, and the particles don't have any independent existence. In Bohmian mechanics, the particles are real and follow a law of motion described by the wave function, but the latter doesn't have any physical existence.

> It's not really derived, rather something equivalent to the Born rule is included as one of those postulates.

I don't think that's correct. To my knowledge, distributions conforming to the Born rule [1] are guaranteed in all but highly anomalous initial configurations, akin to how thermodynamics ensures that entropy always increases except, again, in highly anomalous initial configurations. And even for the majority of anomalous distributions, evolution under the Bohmian dynamics is very likely to converge to the Born rule anyway [2].

Because this quantum equilibrium is a hypothesis and not a postulate, this leaves open the possibility that Bohmian mechanics can be experimentally differentiated from orthodox QM, but no one has figured out a way to actually create such distributions.

Anyway, this is all an interesting academic exercise and I don't think Bohmian mechanics is nearly as problematic as some physicists think, but it's unlikely to go anywhere given the little investment it receives.

[1] https://arxiv.org/abs/quant-ph/0308039

[2] https://arxiv.org/abs/1103.1589


> For MWI, the wavefunction is real and the grooves are real worlds that evolve in parallel, and the particles don't have any independent existence. In Bohmian mechanics, the particles are real and follow a law of motion described by the wave function, but the latter doesn't have any physical existence.

Obviously I'm coming at this from a partisan perspective, but that really does seem like more postulates - either you calculate your wavefunction evolution and predict your experimental results from that or you calculate that same wavefunction evolution, calculate your particle state ensemble from that, and then predict your experimental results from that.

> I don't think that's correct. To my knowledge, distributions conforming to the Born rule [1] are guaranteed in all but highly anomalous initial configurations, akin to how thermodynamics ensure that entropy always increases except again, in highly anomalous initial configurations.

I tried to skim through the 60 page paper but couldn't find the part you're claiming. Most modern presentations of Bohmian mechanics take the probability rule as a postulate. Bohm did initially present it with a statistical fluctuation and dynamical evolution argument, but most people find that unsatisfactory, and you can (and people do!) make the same argument in an Everett-style many worlds setting as well. (Admittedly people tend to find it even less convincing in a probability-branches setting than a particles-following-a-probability-distribution setting, but I suspect that's an artifact of similarity to classical thermodynamics rather than because it's objectively more plausible there).


I'll have to look into those derivations. Thanks.

> There is an additional postulate, namely that the state vector is the real world we inhabit.

Well, yes, it's a model for the physical world. Refusing to accept that the state vector is the real world we inhabit is tantamount to rejecting the existence of an objective universe, in which case any discussion is moot, or to the outright rejection of quantum theory, which seems irrational. (i.e. "I don't believe the state vector represents our world, despite it being the best physical model of our time")


> Well, yes, it's a model for the physical world. Refusing to accept that the state vector is the real world we inhabit is tantamount to rejecting the existence of an objective universe, in which case any discussion is moot

Some physicists consider the wave function to be ontologically inadequate to explain the physical world. See the discussion of "bohmian mechanics being many worlds in denial" for some details and references:

https://plato.stanford.edu/entries/qm-bohm/#ObjeResp


It may be inadequate, but the consequences of quantum theory still apply without having to add new postulates. Among those consequences is many worlds.


Saying it's ontologically inadequate means that it's insufficient to describe what we consider "real", and thus the wave function by itself cannot describe one world let alone many worlds.


There is definitely an axiom built into many-worlds.

Something along the lines of "things we don't observe are as valid and exist as much as things we do observe"

That's an addition over the usual "things we observe exist"


Except we do observe interference effects. I'm not sure what you mean.


The interferences are part of those things that we observe and exist.


This is the first time I've read a way of looking at quantum physics that actually makes sense. I hope it is indeed correct and all those ~smart people who were shoving their ridiculously complex theory of quantum physics down our throats end up being ridiculed.


This reminds me of Feynman's famous quote: "If you think you understand quantum mechanics then you don't understand quantum mechanics." Frankly, I'm baffled that some scientists still cling to classical physics and absolute determinism just because they describe our macroscopic world nicely. People tried to argue against quantum principles 100 years ago and some still do today, even going so far as to believe that the entire universe has conspired against us since the big bang (which is what superdeterminism boils down to in light of things like entanglement) rather than accept the inconvenient truth: our universe is much stranger than classical physicists could have envisioned.


Going with "if it is not weird enough, it can't be right" doesn't seem very scientific to me.


Nobody does that. We follow the math and the experimental results. There is friction against accepting what the results tell us, which is the odd thing.


You're exactly the target audience for her content :) .


I suppose. It does seem pretty logical that statistical independence between measurement and particles could be violated, and I am just a newbie. If someone could explain why people overlooked this fact I would be very interested.


> Once you understand what’s going on with the double slit, all the other quantum effects that are allegedly mysterious or strange also make sense. Take for example a delayed choice experiment. In such an experiment, it’s only after the particle started its path that you decide whether to measure which slit it went through. And that gives the same result as the usual double slit experiment.

Delayed choice seems pretty silly in relativistic terms to me. If photons are massless, and the distances they travel are 0, then there is no such thing as deciding to measure the slit after the photon is emitted but before it reaches the slit, right? That's just a figment of our reference frame.


But reference frames do have physical significance (and the distance traveled does depend on one).


So, from the photon’s perspective (I know perspective isn’t right, but I’m not sure what else to call it), the distance traveled is 0, and the emitting and measuring happen instantaneously right?

What I’m struggling with is what it means to the experiment to move a measurement into the system between these events if they are at the same moment according to the thing being measured.


A frame of reference that moves at the speed of light is non-physical. For example, using it you could conclude that objects that have non-zero mass, too, all move at the speed of light...


There are other possible choices that reject superdeterminism but are still ok: https://arxiv.org/abs/1907.05607 -- you have to abandon either locality or the assumption that observed events exist absolutely

Personally I think it's easiest to reject the "absoluteness" of events and subscribe to RQM or QBism. Superdeterminism is a little boring if ultimately plausible.


The post suggests that carefully designed experiments could settle the question. I hope they get performed.


Superdeterminism needs some kind of simplicity, for lack of a better word. To take her vaccine trial analogy further, imagine that you randomly assign a group of 100 people to treatment and control cells (50-50). It could happen that all 50 people in the control cell just happened to be the 50 most a priori healthy people in the group. But it's extremely unlikely. Following the analogy, superdeterminism says it happens anyway. But the level of coincidence involved, that your coin flip comes up heads on exactly these people, is so absurd that Occam's razor might as well spontaneously cease to exist.
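To put a number on "extremely unlikely" in the analogy (my own back-of-the-envelope sketch): the chance that a uniformly random 50/50 split of 100 people puts the 50 a priori healthiest all in the control arm is exactly 1/C(100, 50).

```python
from math import comb

# Probability that a uniformly random 50/50 split of 100 people lands
# the 50 a-priori healthiest all in the control cell: 1 / C(100, 50).
p = 1 / comb(100, 50)
print(f"{p:.2e}")  # on the order of 1e-29
```

A superdeterministic hidden-variable assignment would have to keep landing in slivers of configuration space this thin, trial after trial.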

Are there any superdeterministic theories that maintain the level of simplicity that the universe is posited to have without it?


> But the level of coincidence involved invokes such an absurdly complex series of coincidences, that your coin flip comes up heads on these particular people, that occam’s razor might as well spontaneously cease to exist.

Sure, but the experiments you have to run to see these absurd coincidences are also absurdly contrived, so is that really surprising?

It's worth noting that Sabine's take on superdeterminism is also a bit different than past takes that are rife with "conspiracy", so I suggest reading her papers.


The point is that it's only unlikely (or, in other situations, likely) if the events, even from patient to patient, are random. If whether a patient is sick or not determined whether they're in the placebo group or the treatment group, you would be testing exactly nothing. The randomness in placing the patient is a required part of the experiment. And more than random: the patient placement must be random and independent of their disease status (misdiagnosed healthy patients get treated too, after all; generally that works quite well, as there isn't any problem to solve) and of whether the treatment will work.

And of course superdeterministic theories are "simple". In the same sense that you can model any sequence as a function: just list all observed values for all time. Very simple, very little predictive power when you need it.

That also gives you the criticism of superdeterminism: if we do things that way, nothing needs an explanation, as it just can't be simplified from that full specification that we observe ... so there's no point to science. Note that it can still be more complex than we'll ever usefully realise, so whilst it means everything is determined, it does not necessarily (in fact quite unlikely imho) make it predictable for us limited beings.

But, in at least one meaning of the word, it's certainly simple. Every function in physics, whether quantum gravity or electroweak forces or the number of puppies your cat will have next year: f(x) = the x'th entry in the table ... (tables, of course, that you generally don't have access to)

Which isn't to say this strategy can't work and perform useful functions. Take a robot, for example, and look at inverse kinematics. That involves 3-D calculus, volume intersection (the robot mustn't self-intersect), and there are limits not just on position but on speed, acceleration and torque as well. You can fully develop this theory. Or you can move the robot, observe all variables and save them to disk, then use nearest neighbour to figure out robot movement. Works like a charm, and doesn't even need to know how many arms the robot has. And while it takes a while, I bet it's a hell of a lot faster than building the theory for even the simplest of robots, never mind multi-armed or nonlinear robots. Plus it's trivial to make a machine learning algorithm do it for you, and the same can't be said of developing the correct theory for a new robot model.
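The lookup-table approach can be sketched concretely. This is a toy 2-link planar arm of my own invention, purely illustrative: forward kinematics is known here, but the "solver" pretends it isn't and answers inverse-kinematics queries by nearest neighbour over recorded observations, with no theory of the arm at all.

```python
import math

# Hypothetical 2-link planar arm (unit-length links), for illustration only.
def forward(theta1, theta2, l1=1.0, l2=1.0):
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# "Move the robot, observe all variables and save them to disk."
table = []
steps = 60
for i in range(steps):
    for j in range(steps):
        t1 = -math.pi + 2 * math.pi * i / steps
        t2 = -math.pi + 2 * math.pi * j / steps
        table.append(((t1, t2), forward(t1, t2)))

# Inverse kinematics by nearest neighbour: no kinematic theory needed.
def ik_lookup(target):
    return min(table, key=lambda rec: (rec[1][0] - target[0]) ** 2
                                      + (rec[1][1] - target[1]) ** 2)[0]

angles = ik_lookup((1.0, 1.0))
x, y = forward(*angles)
print(round(x, 2), round(y, 2))  # lands close to the target (1.0, 1.0)
```

Denser sampling buys accuracy at the cost of table size, which is exactly the trade the comment describes: full specification instead of compressed theory.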


The randomness here is a stand-in for complexity. Substitute some other volume measure and the “probability” logic still works out, meaning that you can interpret “unlikely” as meaning “only happens in a teensy tiny sliver of phase space.” And with any reasonable way of trying to do randomness (coin flipping, PRNGs), it also means that it’s not just any sliver of phase space, it’s an extraordinarily complex sliver of it. Hence the original statement about how complex superdeterministic theories can potentially be. And hence the question about whether any of today’s superdeterministic theories avoid this extraordinary complexity issue.


[flagged]


She communicates with equations regularly in actual science papers: https://arxiv.org/a/hossenfelder_s_1.html


Of course. I assume that in those papers she demonstrates a similar lack of understanding of the details, or that she doesn't do research on quite the same things she talks about in her public-facing content.

Someday I'll get around to reading some of them, and I'll find whether that assumption is wrong.


Because she's accessible and prolific. It doesn't matter if she's wrong, because there are no better-qualified physicists making YouTube videos every week.


Honest question, why did you have to go ad hominem? If you look at her papers she’s obviously well versed in QFT and GR. You may disagree with her philosophical views but making the case that she doesn’t understand QM doesn’t pass the smell test.


I didn't commit an ad hominem fallacy. I did not say that her blog is invalid because of her character. I was just commenting on her character (actually more so on her work tbh).

i.e. an insult is not an ad hominem. This is an ad hominem: "Your argument is wrong because <insult to character>." This is not an ad hominem: "<insult to character, particularly one implying they are often wrong>."



