  • We all have an intuitive grasp of 0. It’s just that once you define what it looks like for something to change in a particular situation, i.e. what x>0 or x<0 looks like, then x=0 is just when it hasn’t changed at all.

    I feel like this discussion is getting too philosophical. My point isn’t that deep, it is just to keep complex numbers intuitive and tied to physical reality. We shouldn’t treat imaginary numbers like some out-there, almost mystical thing we should just accept at face value. When we realize that they are mathematically equivalent to a set of operations on a vector of two real numbers, we can then get an intuitive understanding of what they actually represent in the real world. You can visualize a complex number as a vector on a plane (called the complex plane), and then visualize operations on complex numbers as manipulations of that vector.

    The Fourier transform has complex numbers in it. This isn’t mysterious, it’s just that the Fourier transform deals with waves, and waves are two-dimensional, so they need to be described by a vector of two numbers. The Fourier transform effectively wraps the wave around a circle. If the rate of wrapping is different from the frequency of the wave, then every time you complete a revolution of the circle you will either have overshot or undershot a complete cycle of the wave, so your next wrapping is off-center, and if you repeat this indefinitely, all the off-center wrappings cancel each other out, giving you 0 in the limit. But if the rate of wrapping matches the frequency, then a revolution around the circle corresponds exactly to a cycle of the wave, so you do not get this cancelling and it blows up to infinity in the limit.
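    To make the wrapping picture concrete, here is a minimal numpy sketch of it; the signal frequency and winding rates are just illustrative values I picked, not anything from above.

        import numpy as np

        # A pure wave with frequency f0, sampled over many full cycles.
        f0 = 5.0                          # wave frequency in Hz (illustrative)
        t = np.linspace(0, 10, 20000)     # 10 seconds of samples
        wave = np.cos(2 * np.pi * f0 * t)

        def wrap_around_circle(signal, t, winding_freq):
            # Wrap the signal around a circle at the given winding rate and
            # return the magnitude of the average (the "center of mass").
            winding = np.exp(-2j * np.pi * winding_freq * t)
            return abs(np.mean(signal * winding))

        # Off-frequency winding: successive wrappings land off-center and cancel out.
        print(wrap_around_circle(wave, t, 3.0))   # ~0
        print(wrap_around_circle(wave, t, 7.2))   # ~0

        # Winding at the wave's own frequency: every cycle lines up, nothing cancels.
        # (Averaging keeps this finite; the raw un-averaged sum is what grows without
        # bound as the duration goes to infinity.)
        print(wrap_around_circle(wave, t, 5.0))   # ~0.5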

    You can get very intuitive mental images of what complex numbers are actually doing when you recognize this. There shouldn’t be a layer of mystery put on top of them. People often act like they are something so mysterious we just have to accept at face-value, and others will even justify this by pointing out that they’re used in quantum mechanics, and quantum theory is “weird,” therefore we should just accept this weird thing at face-value and not question it.

    All I am trying to point out is that complex numbers are not “weird”: they have a clear meaning, you can visualize them and build an intuition for them, and there are always good, intuitive explanations for why they show up in certain equations. I am not making a deep philosophical point here. I am only arguing against obfuscating the meaning of imaginary numbers. The term “imaginary” is honestly not a good name. Complex numbers probably should just be called 2D numbers, with the real and imaginary components called the X and Y components or something like that. They are just a way of concisely representing something that is two-dimensional. There are also quaternions, which are 4D numbers.



  • Because your arguments are just bizarre. Imaginary numbers do not have a priori definitions. Humans have to define imaginary numbers and define the mathematical operations on them. There is no “hostile confusion” or “flaw,” there is you making the equivalent of flat-earth arguments but for mathematics. You keep claiming things that are objectively false, and so obviously false it is bizarre how anyone could even make such a claim. I do not even know how to approach it: how on earth do you come to believe that complex numbers have a priori definitions and they aren’t just humans defining them like any other mathematical operation? There are no pre-given definitions for complex numbers, their properties are all explicitly defined by human beings, and you can also define those properties on vectors. You at first claimed that supposedly you can only do certain operations on complex numbers that you cannot do on vectors; I pointed out this is obviously false and you couldn’t give a single counter-example, so now you switch to claiming that somehow the operations on complex numbers are all “pre-given.” Makes zero sense. You have not pointed out a “flaw,” you just ramble and declare victory, throwing personal attacks calling me “confused” like this is some sort of competition, when you have not even made a single coherent point. Attacking me and downvoting all my posts isn’t somehow going to prove that you cannot decompose any complex-valued operation into real numbers, nor is it going to prove that complex numbers somehow don’t have to have their properties and operations postulated just like real numbers.


  • And you can also just write it out using real numbers if you wish, it’s just more mathematically concise to use complex numbers. It’s a purely subjective, personal choice to use complex-valued notation. You are trying to argue that making a personal, subjective, arbitrary choice somehow imposes something upon physical reality. It doesn’t. There isn’t anything wrong with the standard formulation, but it is a choice of convention, and conventions aren’t physical. If I describe my losses with a positive number, and then later change convention and describe my winnings with a negative number, the underlying physical reality has not changed; it’s not going to suddenly transmute into something else because of a change in convention in how I describe it.

    The complex numbers in quantum theory are not magic. They are also popular in classical mechanics, and are just quite common in wave mechanics in general (classical or quantum). In classical wave mechanics and in classical computer science, we use the Fourier transform a lot, which is typically expressed with complex numbers. It’s because waves have two degrees of freedom, so you could describe them using a vector of two real numbers, or you could describe them using complex numbers. People like the complex-valued notation because it’s more concise to write down and express formulas in, but at the end of the day it’s just a convention, a notation created by human beings, and many other mathematically equivalent notations can describe the same exact thing.
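    As a small illustration of the “two real numbers or one complex number” point, here is a sketch with made-up amplitude, frequency, and phase values; both conventions describe exactly the same wave.

        import numpy as np

        t = np.linspace(0, 1, 1000)
        amplitude, frequency, phase = 2.0, 3.0, 0.7   # illustrative values

        # Convention 1: a pair of real numbers (cosine and sine components).
        a = amplitude * np.cos(phase)
        b = amplitude * np.sin(phase)
        wave_from_reals = a * np.cos(2 * np.pi * frequency * t) - b * np.sin(2 * np.pi * frequency * t)

        # Convention 2: one complex number (a phasor); the physical wave is its real part.
        phasor = amplitude * np.exp(1j * phase)
        wave_from_complex = np.real(phasor * np.exp(2j * np.pi * frequency * t))

        print(np.allclose(wave_from_reals, wave_from_complex))   # True: same wave, two notations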


  • I am having genuine difficulty imagining how you think you made a point here. It seems you’re claiming that if two expressions have the same symbol between them they should have identical output, i.e. that (a,b) * (c,d) should have the same mathematical definition as (a+bi) * (c+di), or else complex numbers are not reducible to real numbers.

    You realize mathematical symbols are just conventions, right? They were not handed down to us from Zeus almighty. They are entirely human creations. I can happily define the meaning of (a,b) * (c,d) to be (ac-bd,ad+bc) and now it is mathematically well-defined and gives identical results.
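    A quick sketch of that definition (the function name is mine), checked against Python’s built-in complex type:

        def vec_mul(u, v):
            # Multiply two 2D vectors (a, b) and (c, d) using the rule (ac - bd, ad + bc).
            a, b = u
            c, d = v
            return (a * c - b * d, a * d + b * c)

        u, v = (2.0, 3.0), (-1.0, 4.0)
        z = complex(*u) * complex(*v)     # the same numbers as built-in complex values

        print(vec_mul(u, v))              # (-14.0, 5.0)
        print((z.real, z.imag))           # (-14.0, 5.0) -- identical results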


  • Negative numbers are just real numbers with a symbol attached. Yes, that’s literally true. In computer code we only ever deal with 0s and 1s. We come up with a convention to represent negative numbers; they are still ultimately zeros and ones, but we just say “zeros and ones in this form represent a negative number,” usually just by having the most significant bit be 1. There are no physical negative numbers floating out there in the world in some Platonic sense. What we call “negative” is contextual. It depends upon how we frame a problem and how we interpret a situation. You can lose money at a casino and say your earnings are now negative, or you can say your losses are now positive. Zeus isn’t going to strike you down for saying one over the other. There is nothing physically dictating what convention you use. You just use whichever convention you find most intuitive and mathematically convenient given the problem you’re trying to describe.
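    A small sketch of that convention, using 8-bit two’s complement (the usual “most significant bit” scheme); the helper names are mine:

        def to_bits(value, width=8):
            # Encode an integer as an unsigned bit pattern (two's complement for negatives).
            return format(value & ((1 << width) - 1), f"0{width}b")

        def from_bits(bits, width=8):
            # Decode the same pattern back, *interpreting* a leading 1 as "negative".
            n = int(bits, 2)
            return n - (1 << width) if bits[0] == "1" else n

        print(to_bits(5))              # 00000101
        print(to_bits(-5))             # 11111011 -- still just 0s and 1s
        print(from_bits("11111011"))   # -5, under the signed convention
        print(int("11111011", 2))      # 251, the very same bits read as unsigned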

    Yes, when we are talking about how computers work, we are talking about how numbers actually manifest in objective, physical reality. They are not some magical substance floating out there in the Platonic realm. Whenever we actually go to implement complex numbers or even negative numbers in the real world, whenever we try to construct a physical system that replicates their behavior and can perform calculations on a physical level, we always just use unsigned real numbers (or natural numbers), and then establish signage and complexity later as conventions combined with a set of operations defining how they should behave.

    I’m not sure what your point about fractional numbers is. If you mean literally a/b, yes, there is software that treats a/b as just two natural numbers stitched together, but it’s actually a bit mathematically complicated to always keep things in fractional form, so that’s fairly rare and you’d mostly see it in specialized math software. Usually it’s represented with a floating point number. In a digital computer that number is an approximation, as it’s ultimately digital, but I wouldn’t say that means only digital numbers are physical, because we can also construct analogue computers that can do useful computations and are not digital. Unless we discover that space is quantized and thus they were digital all along, I do think it is meaningful to treat real numbers as, well, physically real, because we can physically implement them.
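    For what it’s worth, both representations are easy to play with in Python’s standard library; a small sketch:

        from fractions import Fraction

        # Exact rational arithmetic: a/b kept as two integers stitched together.
        exact = Fraction(1, 10) + Fraction(2, 10)
        print(exact)                      # 3/10
        print(exact == Fraction(3, 10))   # True

        # The usual floating point representation: a digital approximation.
        approx = 0.1 + 0.2
        print(approx)                     # 0.30000000000000004
        print(approx == 0.3)              # False -- the approximation shows through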



  • A complex number is just two real numbers stitched together. It’s used in many areas: for example, the Fourier transform, which is common in computer science, is often represented with complex numbers because it deals with waves, and waves are two-dimensional, so rather than needing two different equations you can represent it with a single equation where the two-dimensional behavior occurs on the complex plane.

    In principle you can always just split a complex number into two real numbers and carry on the calculation that way. In fact, if we couldn’t, then no one would use complex numbers, because computers can’t process imaginary numbers directly. Every computer program that deals with complex numbers is, behind the scenes, decomposing them into two real-valued floating point numbers.
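    You can see that decomposition directly in numpy: a complex array is literally stored as pairs of real floating point numbers, and doing the arithmetic on the pairs by hand gives the same answer. A small sketch:

        import numpy as np

        z = np.array([3 + 4j, -1 + 2j])           # complex128: two float64s per element
        print(z.view(np.float64))                  # [ 3.  4. -1.  2.] -- the raw real/imag pairs

        # The multiplication done "by hand" on the real pairs gives the same result.
        a, b = z[0].real, z[0].imag
        c, d = z[1].real, z[1].imag
        print(z[0] * z[1])                         # (-11+2j)
        print((a * c - b * d, a * d + b * c))      # (-11.0, 2.0)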


  • Many-worlds is nonsensical mumbo jumbo. It doesn’t even make sense without adding an additional unprovable postulate called the universal wave function. Every paper just has to assume it without deriving it from anywhere. If you take MWI and subtract away this arbitrary postulate then you get RQM. MWI - big psi = RQM. So RQM is inherently simpler.

    Although the simplest explanation isn’t even RQM, but to drop the postulate that the world is time-asymmetric. If A causes B and B causes C, one of the assumptions of Bell’s theorem is that it would be invalid to say C causes B which then causes A, even though we can compute the time-reverse in quantum mechanics and there is nothing in the theory that tells us the time-reverse is not equally valid.

    Indeed, that’s what unitary evolution means. Unitarity just means time-reversibility. You test if an operator is unitary by multiplying it by its own time-reverse, and if it gives you the identity matrix, meaning it completely cancels itself out, then it’s unitary.
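    A small numpy sketch of that test, applied to one operator that is unitary and one that is not:

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
        P = np.array([[1, 0], [0, 0]])                 # a projector (measurement-style update)

        def is_unitary(U):
            # U is unitary if multiplying it by its conjugate transpose (its "time-reverse")
            # cancels out to the identity matrix.
            return np.allclose(U @ U.conj().T, np.eye(U.shape[0]))

        print(is_unitary(H))   # True
        print(is_unitary(P))   # False -- projection throws information away and can't be undone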

    If you just accept time-symmetry then it is just as valid to say A causes B as it is to say C causes B, as B is connected to both through a local causal chain of events. You can then imagine that if you compute A’s impact on B it has ambiguities, and if you compute C’s impact on B it also has ambiguities, but if you combine both together the ambiguities disappear and you get an absolutely deterministic value for B.

    Indeed, it turns out quantum mechanics works precisely like this. If you compute the unitary evolution of a system from a known initial condition to an intermediate point, and the time-reverse of a known final condition to that intermediate point, you can then compute the values of all the observables at that intermediate point. If you repeat this process for all observables in the experiment, you will find that they evolve entirely locally and continuously. Entangled particles form their correlations when they locally interact, not when you later measure them.

    But for some reason people would rather believe in an infinite multiverse than just accept that quantum mechanics is not a time-asymmetric theory.



  • That’s a classical ambiguity, not a quantum ambiguity. It would be like if I placed a camera that recorded when cars arrived, but I only gave you the information that it detected a car and at what time, with no other information, not even the footage, and asked you to work out which car came first. You can’t, because that’s not enough information.

    The issue here isn’t a quantum mechanical one but due to the resolution of your detector. In principle, if it were precise enough, then because the radiation emanates from different points you could figure out which one came first, since there would be non-overlapping differences. This is just a practical issue due to the low resolution of the measuring device, not a quantum mechanical ambiguity that couldn’t be resolved with a more precise measuring apparatus.

    A more quantum mechanical example is something like applying the H operator twice in a row, measuring, and then asking for the value of the qubit after the first application. It would be in a superposition of states that describes both possibilities symmetrically, so the wave function you derive from its forwards-in-time evolution is not enough to tell you anything about its observables at all; and if you try to measure it at the midpoint, you also alter the outcome at the final point, no matter how precise the measuring device is.
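    Here is a small numpy sketch of that example; the midpoint measurement is modeled with the usual project-and-renormalize update:

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        ket0 = np.array([1.0, 0.0])

        # No midpoint measurement: H twice brings |0> back to |0> with certainty.
        final = H @ (H @ ket0)
        print(np.abs(final) ** 2)          # [1. 0.]

        # The midpoint state is (|0> + |1>)/sqrt(2): the forwards-in-time wave function
        # alone is symmetric between both possibilities.
        mid = H @ ket0
        print(np.abs(mid) ** 2)            # [0.5 0.5]

        # Measure at the midpoint (say the outcome is |0>), then apply the second H:
        # the final outcome is no longer certain, no matter how precise the device.
        collapsed = np.array([1.0, 0.0])
        print(np.abs(H @ collapsed) ** 2)  # [0.5 0.5]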


  • Let’s say the initial state is at time t=x, the final state is at time t=z, and the state we’re interested in is at time t=y where x < y < z.

    In classical mechanics you condition on the initial known state at t=x and evolve it up to the state you’re interested in at t=y. This works because the initial state is a sufficient constraint in order to guarantee only one possible outcome in classical mechanics, and so you don’t need to know the final state ahead of time at t=z.

    This does not work in quantum mechanics because evolving time in a single direction gives you ambiguities due to the uncertainty principle. In quantum mechanics you have to condition on the known initial state at t=x and the known final state at t=z, and then evolve the initial state forwards in time from t=x to t=y and the final state backwards in time from t=z to t=y where they meet.

    Both directions together provide sufficient constraints to give you a value for the observable.

    I can’t explain it in more detail than that without giving you the mathematics. What you are asking is ultimately a mathematical question and so it demands a mathematical answer.


  • I am not that good with abstract language. It helps to put it into more logical terms.

    It sounds like what you are saying is that you begin with something in a superposition of states like (1/√2)(|0⟩ + |1⟩), which we could achieve with the H operator applied to |0⟩, and then you make that the cause of something else, which we would achieve with the CX operator and which would give us (1/√2)(|00⟩ + |11⟩), and then measure it. We can call these t=0, starting in the |00⟩ state; t=1, when we apply the H operator to the least significant qubit; and t=2, when we apply the CX operator with the control on the least significant qubit.

    I can’t answer it for the two cats literally because they are made up of a gorillion particles and computing it for all of them would be computationally impossible. But in this simple case you would just compute the weak values, which requires you to also condition on the final state, which in this case could be |00⟩ or |11⟩. For each observable, say the one at t=x, you take the Hermitian transpose of the final state, multiply it by the reversed unitary evolution from t=2 to t=x, multiply that by the observable, then multiply that by the forwards-in-time evolution from t=0 to t=x applied to the initial state, and then normalize the whole thing by dividing it by the Hermitian transpose of the final state times the whole reverse time evolution from t=2 to t=0 times the initial state.
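    Here is a numpy sketch of that computation for this particular circuit; the helper names and the loop structure are mine, and each printed triple is the (X, Y, Z) weak values for one qubit:

        import numpy as np

        I2 = np.eye(2)
        H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        X  = np.array([[0, 1], [1, 0]])
        Y  = np.array([[0, -1j], [1j, 0]])
        Z  = np.array([[1, 0], [0, -1]])

        # Basis ordering |q1 q0> with q0 the least significant qubit.
        H0 = np.kron(I2, H)                  # t=0 -> t=1: H on the least significant qubit
        CX = np.array([[1, 0, 0, 0],         # t=1 -> t=2: CX, control q0, target q1
                       [0, 0, 0, 1],
                       [0, 0, 1, 0],
                       [0, 1, 0, 0]])
        steps = [H0, CX]

        def weak_value(A, t, initial, final):
            # <final| (evolution from t to the end) A (evolution from the start to t) |initial>,
            # normalized by <final| (full evolution) |initial>.
            U_before, U_after = np.eye(4), np.eye(4)
            for U in steps[:t]:
                U_before = U @ U_before
            for U in steps[t:]:
                U_after = U @ U_after
            num = final.conj() @ U_after @ A @ U_before @ initial
            den = final.conj() @ U_after @ U_before @ initial
            return num / den

        basis = {"00": np.eye(4)[0], "11": np.eye(4)[3]}
        initial = basis["00"]

        for outcome, final in basis.items():
            print(f"measured {outcome}:")
            for t in range(3):
                msq = [np.round(weak_value(np.kron(P, I2), t, initial, final), 3) for P in (X, Y, Z)]
                lsq = [np.round(weak_value(np.kron(I2, P), t, initial, final), 3) for P in (X, Y, Z)]
                print(f"  t={t}: most significant {msq}; least significant {lsq}")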

    In the case where the measured state at t=3 is |00⟩ we get for the observables (most significant qubit followed by least significant, each triple listing the X, Y, and Z values)…

    • t=0: (0,0,+1);(+1,+i,+1)
    • t=1: (0,0,+1);(+1,-i,+1)
    • t=2: (0,0,+1);(0,0,+1)

    In the case where the measured state at t=3 is |11⟩ we get for the observables…

    • t=0: (0,0,+1);(-1,-i,+1)
    • t=1: (0,0,+1);(+1,+i,-1)
    • t=2: (0,0,-1);(0,0,-1)

    The values |0⟩ and |1⟩ just mean that the Z observable has a value of +1 or -1, so if we just look at the values of the Z observables we can rewrite this in something a bit more readable.

    • |00⟩ → |00⟩ → |00⟩
    • |00⟩ → |01⟩ → |11⟩

    Even though the initial conditions both began at |00⟩, they have different values on their other observables, which then plays a role in subsequent interactions. The least significant qubit in the case where the final state is |00⟩ begins with a different signage on its Y observable than in the case where the outcome is |11⟩. That causes the H operator to have a different impact: in one case it flips the least significant qubit and in the other it does not. If it gets flipped, then, since it is the control for the CX operator, it will flip the most significant qubit as well, but if it isn’t flipped then it won’t.

    Notice how there is also no t=3, because t=3 is when we measure, and the algorithm guarantees that the values are always in the state you will measure before you measure them. So your measurement does reveal what is really there.

    If we say |0⟩ = no sleepy gas is released and the cat is awake, and |1⟩ = sleepy gas is released and the cat go sleepy time, then in the case where both cats are observed to be awake when you open the box, at t=1 we have |00⟩, meaning the first one’s sleepy gas didn’t get released, and so at t=2 we have |00⟩, as it doesn’t cause the other one’s to get released. In the case where both cats are observed to be asleep when you open the box, at t=1 we have |01⟩, meaning the first one’s did get released, and at t=2 we have |11⟩, as that causes the second one’s to be released.

    When you compute this algorithm you find that the values of the observables are always set locally. Whenever two particles interact such that they become entangled, then they will form correlations for their observables in that moment and not later when you measure them, and you can even figure out what those values specifically are.

    To borrow an analogy I heard from the physicist Emily Adlam, causality in quantum mechanics is akin to filling out a Sudoku puzzle. The global rules and some “known” values constrain the puzzle so that you are only capable of filling in very specific values, and so the “known” values plus the rules determine the rest of the values. If you are given the initial and final conditions as your “known” values, plus the laws of quantum mechanics as the global rules constraining the system, then there is only one way you can fill in these numbers, those being the values for the observables.


  • “Free will” usually refers to the belief that your decisions cannot be reduced to the laws of physics (e.g. people who say “do you really think your thoughts are just a bunch of chemical reactions in the brain???”), either because they can’t be reduced at all or that they operate according to their own independent logic. I see no reason to believe that and no evidence for it.

    Some people try to bring up randomness but even if the universe is random that doesn’t get you to free will. Imagine if the state forced you to accept a job for life they choose when you turn 18, and they pick it with a random number generator. Is that free will? Of course not. Randomness is not relevant to free will. I think the confusion comes from the fact that we have two parallel debates of “free will vs determinism” and “randomness vs determinism” and people think they’re related, but in reality the term “determinism” means something different in both contexts.

    In the “free will vs determinism” debate we are talking about nomological determinism, which is the idea that reality is reducible to the laws of physics and nothing more. Even if those laws may be random, it would still be incompatible with the philosophical notion of “free will” because it would still be ultimately the probabilistic mathematical laws that govern the chemical reactions in your brain that cause you to make decisions.

    In the “randomness vs determinism” debate we are instead talking about absolute determinism, sometimes also called Laplacian determinism, which is the idea that if you fully know the initial state of the universe you could predict the future with absolute certainty.

    These are two separate discussions and shouldn’t be confused with one another.


  • In a sense it is deterministic. It’s just that when most people think of determinism, they think of conditioning on the initial state, and that this provides sufficient constraints to predict all future states. In quantum mechanics, conditioning on the initial state does not provide sufficient constraints to predict all future states and leads to ambiguities. However, if you condition on both the initial state and the final state, you appear to get deterministic values for all of the observables. It seems to be deterministic, just not forwards-in-time deterministic but “all-at-once” deterministic. Laplace’s demon would just need to know the very initial conditions of the universe and the very final conditions.


  • Many Worlds is an incredibly bizarre point of view.

    Quantum mechanics has two fundamental postulates, those being the Schrodinger equation and the Born rule. It’s impossible to get rid of the Born rule in quantum mechanics, as shown by Gleason’s theorem; it’s an inevitable consequence of the structure of the theory. But the Schrodinger equation implies that systems can undergo unitary evolution in certain contexts, whereas the Born rule implies systems can undergo non-unitary evolution in other contexts.

    If we just take this as true at face value, then it means the wave function is not fundamental, because it can only model unitary evolution, hence why you need the measurement update hack to skip over non-unitary transformations. It is only a convenient shorthand for when you are solely dealing with unitary evolution. The density matrix is then more fundamental because it is a complete description which can model both unitary and non-unitary transformations, without the need for a measurement update (“collapse”), and does so continuously and linearly.
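    A small numpy sketch of that point: the measurement update that has to be bolted onto the wave function is an ordinary linear (but non-unitary) map when written on the density matrix. The example is a Z-basis measurement on a single qubit, with the outcome not recorded:

        import numpy as np

        ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
        rho = np.outer(ket_plus, ket_plus.conj())      # density matrix of the superposition

        # Non-selective Z measurement as a map on density matrices: rho -> P0 rho P0 + P1 rho P1.
        P0 = np.diag([1.0, 0.0])
        P1 = np.diag([0.0, 1.0])
        measured = P0 @ rho @ P0 + P1 @ rho @ P1

        print(rho)        # off-diagonal terms present: a coherent superposition
        print(measured)   # off-diagonals gone: non-unitary, yet the map is linear in rho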

    However, MWI proponents have a weird unexplained bias against the Born rule and love for unitary evolution, so they insist the Born rule must actually just be due to some error in measurement, and that everything actually evolves unitarily. This is trivially false if you just take quantum mechanics at face value. The mathematics at face value unequivocally tells you that both kinds of evolution can occur under different contexts.

    MWI tries to escape this by pointing out that because it’s contextual, i.e. “perspectival,” you can imagine a kind of universal perspective where everything is unitary. For example, in the Wigner’s friend scenario, the friend would describe the particle as undergoing non-unitary evolution, but Wigner, from his “outside” perspective, would describe the system as still evolving unitarily. Hence, you can imagine a cosmic, godlike perspective outside of everything, and from it, everything would always remain unitary.

    The problem with this is Hilbert space isn’t a background space like Minkowski space where you can apply a perspective transformation to something independent of any physical object, which is possible with background spaces because they are defined independently of the relevant objects. Hilbert space is a constructed space which is defined dependently upon the relevant objects. Two different objects described with two different wave functions would be elements of different Hilbert spaces.

    That means perspective transformations are only possible into the perspectives of other objects within your defined Hilbert space; you cannot adopt a “view from nowhere” like you can with a background space, so there is just nothing in the mathematics of quantum mechanics that could ever allow you to mathematically derive this cosmic perspective of the universal wave function. You could not even define it, because, again, a Hilbert space is defined in terms of the objects it contains, and so a Hilbert space containing the whole universe would require knowing the whole universe to even define it.

    The issue is that this “universal wave function” is neither mathematically definable nor derivable, so it has to be postulated, along with its mathematical properties, as a matter of fiat. Every single paper on MWI just postulates it entirely by fiat and defines by fiat what its mathematical properties are. Because the Born rule is inevitable from the logical structure of quantum theory, these mathematical properties always include something basically the same as the Born rule, just in a more roundabout fashion.

    None of this plays any empirical role in the real world. The only point of the universal wave function is so that whenever you perceive non-unitary evolution, you can clasp your hands together and pray, “I know from the viewpoint of the great universal wave function above that is watching over us all, it is still unitary!” If you believe this, it still doesn’t play any role in how you would carry out quantum mechanics, because you don’t have access to it, so you still have to treat it as if from your perspective it’s non-unitary.