• 0 Posts
  • 80 Comments
Joined 5 months ago
Cake day: July 7th, 2024

  • No, they are not; they are incredibly wealthy millionaires whose campaigns are bought and paid for by billionaires. The Democrat party is actively supporting an ongoing holocaust, an industrial-scale genocide and ethnic cleansing of millions of people from their homeland. The idea that these people are all secretly saints who are just too scared to act on it is a completely ridiculous belief. They do not do moral things because they are not moral. They are not saints. They simply do not represent those values. You elect a party that openly believes X and then claim they don’t do Y because they’re too scared to do it. No, they don’t do Y because they don’t represent Y; they represent X. Democrats are by no means “soft-willed.” Whenever it comes to something they actually believe in, they are very good at rallying the votes to get it passed, such as when they are passing something in favor of the military industrial complex or the Israel lobby.


  • Democrats are heartless genocidal freaks, and hardly “spineless”; they just don’t care. It’s a party of billionaires. I have no idea how you can unironically believe this ethos that they’re all a bunch of bleeding hearts who are just too scared, quivering in their boots to act, but they all mean well… apparently! No, they just never fight for those values you want them to fight for because their party does not represent those values, and if you’re still pretending they do at this point… I have a bridge to sell you.






  • Quantum encryption won’t ever be a “thing.”

    All cryptography requires a pool of random numbers as input, and while some cryptographic methods are more secure than others, all of them are only as secure as their random number pool. The most secure cipher possible is known as a one-time pad, which can be proven to be as secure as a cryptographic algorithm could possibly be, so the only thing that could possibly lead to it being cracked is a poor random number pool. Since quantum mechanics can be used to generate truly random numbers, you could have a perfect random number pool, which, combined with a perfect cipher, gives you perfect encryption.

    That sounds awesome, right? Well… no. Because it is trivially easy these days to get regular old classical computers to spit out a basically indefinite number of pseudorandom numbers that are indistinguishable from truly random numbers. Why do you think modern operating systems allow you to encrypt your whole drive? You can have a file tens of gigabytes big and you click it and it opens instantly, despite your whole drive being encrypted, because your CPU can generate tens of gigabytes of random numbers good enough for cryptography faster than you can even blink.

    Random number generation is already largely a solved problem for classical computers. I own a quantum random number generator. I can compare it in various test suites, such as the one released by NIST to test the quality of a random number generator, and they can’t tell the difference between that and my CPU’s internal random number generator. Yes, the CPU. Most modern CPUs can both collect entropy from thermal noise to seed a pseudorandom number generator and provide a hardware-level pseudorandom number generator, through instructions such as x86’s RDSEED and RDRAND, so they can generate random numbers good enough for cryptography at blazing speeds.
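    If you want to convince yourself of that, here is a minimal sketch of the kind of check those suites run (the NIST “monobit” frequency test), in Python. I just feed it the OS CSPRNG here; you could feed the same function a dump from a hardware or quantum RNG and get the same kind of p-values:

    ```python
    import os
    import math

    def monobit_pvalue(data: bytes) -> float:
        # NIST SP 800-22 frequency (monobit) test: p-values well above ~0.01 pass;
        # a heavily biased bit stream gives a p-value near 0.
        ones = sum(bin(b).count("1") for b in data)
        n = len(data) * 8
        s_obs = abs(2 * ones - n) / math.sqrt(n)
        return math.erfc(s_obs / math.sqrt(2))

    # The OS CSPRNG passes just as comfortably as a hardware/quantum RNG dump would.
    print(monobit_pvalue(os.urandom(1 << 20)))
    ```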

    The point is that in practice you will never actually notice, even if you were a whole team of PhD statisticians and mathematicians, the difference between a message encrypted by a quantum computer and a message encrypted by a classical computer using an industry-approved library. Yet it is not just that they’re equal; quantum encryption would be far worse. We don’t use one-time pads in practice despite their security because they require keys as long as the message itself, and thus if we adopted them, it would cut the whole internet’s bandwidth in half overnight. Pseudorandom number generators are superior as the basis for cryptography because the key can be very small and the generator can spit out the rest of what is needed to encrypt/decrypt the message from it, and deterministic encryption/decryption algorithms like AES and ChaCha20 are not believed to be crackable even by a quantum computer.
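    To make the key-size point concrete, here is a small sketch (assuming the third-party pyca/cryptography package; the 1 MB message size is just an arbitrary stand-in): a one-time pad needs one key byte per message byte, while a stream cipher like ChaCha20 expands a 32-byte key into as much keystream as you want:

    ```python
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    message = os.urandom(1_000_000)  # stand-in for a 1 MB message

    # One-time pad: the key must be exactly as long as the message.
    otp_key = os.urandom(len(message))
    otp_ciphertext = bytes(m ^ k for m, k in zip(message, otp_key))

    # ChaCha20: a 32-byte key (plus a 16-byte nonce) expands into unlimited keystream.
    key, nonce = os.urandom(32), os.urandom(16)
    chacha_ciphertext = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor().update(message)

    print(len(otp_key), "vs", len(key), "bytes of key material")
    ```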


  • Honestly, the random number generation on quantum computers is practically useless. Their speeds will not get anywhere near those of a pseudorandom number generator, and there are very simple PRNGs you can implement that are blazing fast, far faster than any quantum computer will spit numbers out, and that produce numbers widely considered in the industry to be cryptographically secure. You can use AES, for example, as a PRNG, and most modern CPUs, like x86 processors, have hardware-level AES implementations. This is why modern computers allow you to encrypt your drive: you can have a file that is a terabyte big and encrypted, but your CPU can decrypt it as fast as it takes for the window to pop up after you double-click it.
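    As a rough sketch of what “AES as a PRNG” means (again assuming the pyca/cryptography package; the key size and output amount are arbitrary): run AES in CTR mode over zeros and the keystream itself is your pseudorandom output, and with hardware AES instructions this runs at gigabytes per second:

    ```python
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # Seed material: a 16-byte key and a 16-byte counter block.
    key, iv = os.urandom(16), os.urandom(16)

    # AES-CTR encrypting zeros acts as a fast CSPRNG: the keystream is the output.
    prng = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    pseudorandom_bytes = prng.update(b"\x00" * (1 << 20))  # 1 MiB of output

    print(len(pseudorandom_bytes))
    ```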

    While a PRNG does require an entropy pool, the entropy pool does not need to be large; you can spit out terabytes of cryptographically secure pseudorandom numbers from a fraction of a kilobyte of entropy data. And again, most modern CPUs actually include instructions to grab this entropy data; for example, Intel CPUs have an RDSEED instruction which lets you grab thermal noise from the CPU. To avoid someone discovering a potential exploit, most modern OSes will mix other sources into this pool as well, like fluctuations in fan voltage.

    Indeed, Linux used to have one way to read random numbers directly from the entropy pool and another way to read pseudorandom numbers: /dev/random and /dev/urandom respectively. If you read from the entropy pool and it ran out, the program would block until more entropy could be collected, which is why with some old Linux programs you would see the program freeze until you did things like move your mouse around.

    But you don’t see this anymore, because generating enormous amounts of cryptographically secure random numbers is so easy with modern algorithms that modern Linux just collects a little bit of entropy at boot and uses that to seed all the pseudorandom numbers generated afterwards; direct reads from the entropy pool were dropped, and both /dev/random and /dev/urandom now have the same behavior internally in the OS. Any time your PC needs a random number it just pulls from the pseudorandom number generator that was seeded at boot, and from that short window of collecting entropy data at boot you have the ability to generate sufficient pseudorandom numbers basically forever, and these are the numbers used for any cryptographic application you may choose to run.
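    On a modern Linux box you can see this for yourself with a quick sketch (Linux-only, since os.getrandom() wraps the getrandom() syscall; the sizes are arbitrary): both interfaces hit the same kernel CSPRNG, and the throughput is far beyond anything a hardware QRNG delivers:

    ```python
    import os
    import time

    # getrandom() only blocks until the kernel pool has been initialized once at boot.
    start = time.perf_counter()
    data = os.getrandom(1 << 24)               # 16 MiB from the kernel CSPRNG
    elapsed = time.perf_counter() - start
    print(f"{len(data) / elapsed / 1e6:.0f} MB/s")

    # /dev/urandom is just another interface to the same generator nowadays.
    with open("/dev/urandom", "rb") as f:
        more = f.read(1 << 20)
    ```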

    The point of all this is just to say that random number generation is genuinely a solved problem; people don’t get just how easy it is to produce practically infinite cryptographically secure pseudorandom numbers. While on paper quantum computers are “more secure” because their random numbers would be truly random, in practice you would literally never notice a difference. If you gave two PhD mathematicians or statisticians two copies of the same message, one encrypted using a quantum random number generator and one encrypted with a PRNG like AES or ChaCha20, and asked them to decipher them, they would not be able to decipher either. In fact, I doubt they would even be able to identify which one was encoded using the quantum random number generator. A string of random numbers looks just as “random” to any random number test suite whether it came from a QRNG or a high-quality PRNG (usually called a CSPRNG).

    I do think that, at least on paper, quantum computers could be a big deal if the engineering challenges can ever be overcome, but quantum cryptography, such as “the quantum internet,” is largely a scam. All the cryptographic aspects of quantum computers are practically the same as, if not worse than, traditional cryptography, with only theoretical benefits that are technically there on paper but that nobody would ever notice in practice.


  • It depends upon what you use ChatGPT for and whether you know how to use it productively. For example, if I ask ChatGPT coding questions it is often very helpful. If I ask it history questions it constantly makes things up. You also, again, need to know how to use it. People who claim ChatGPT is not helpful for coding: ask them how they use it, and they basically just ask ChatGPT to do their whole project for them, and when it fails they claim it is useless. But that’s not the productive way to use it; the productive way is as a replacement for StackOverflow, or to get examples of how to use some library, things like that, not doing your whole project for you. Of course, people often use it incorrectly, so it’s probably not a good idea to allow its use in the workplace, but for individual use it can be very helpful.


  • the study that found the universe is not locally real. Things only happen once they are observed

    This is only true if you operate under a very specific and strict criterion of “realism” known as metaphysical realism. Einstein put forward a criterion of what he thought this philosophy implied for a physical theory, and his criterion is sometimes called scientific realism.

    Metaphysical realism is a very complex philosophy. One of its premises is that there exists an “absolute” reality where all objects are made up of properties that are independent of perspective. Everything we perceive is wholly dependent upon perspective, so metaphysical realism claims that what we perceive is not “true” reality but sort of an illusion created by the brain. “True” reality is then treated as the absolute spacetime filled with particles captured in the mathematics of Newton’s theory.

    The reason it relies on this premise is that, by assigning objects perspective-invariant properties, they can continue to exist even if no other object is interacting with them, or, more specifically, they continue to exist even if “no one is looking at them.” For example, if you fire a cannonball from point A to point B, and you only observe it leaving point A and arriving at point B, Newtonian mechanics allows you to “track” its path between these two points even if you did not observe it.

    The problem is that you cannot do this in quantum mechanics. If you fire a photon from point A to point B, the theory simply disallows you from unambiguously filling in the “gaps” between the two points. People then declare that “realism is dead,” but this is a bit misleading because it is really only a problem for metaphysical/scientific realism. There are many other kinds of realism in the literature.

    For example, the philosopher Jocelyn Benoist’s contextual realism argues the exact opposite. The mathematical theory is not “true reality” but is instead a description of reality. A description of reality is not the same as reality. Would a description of the Eiffel Tower substitute for actually seeing it in reality? Of course not; they’re not the same. Contextual realism instead argues that what is real is not the mathematical description but precisely what we perceive. The reason we perceive reality in a way that depends upon perspective is that reality is just relative (or “contextual”). There is no “absolute” reality, only a contextual reality, and that contextual reality we perceive directly as it really is.

    Thus for contextual realism, there is no issue with the fact that we cannot “track” things unambiguously, because it has no attachment to treating particles as if they persist as autonomous entities. It is perfectly fine with just treating it as if the particle hops from point A to point B according to some predictable laws, relative to the context the observer occupies. That is just how objective reality works. Observation isn’t important, and indeed, not even measurement, because whatever you observe in the experimental setting is just what reality is like in that context. The only thing that “arises” is your identification of it.


  • Why did physicists start using the words “real” and “realism”? They are philosophical terms, not physical ones, and they lead to a lot of confusion. “Local” has a clear physical meaning; “realism” gets confusing. I have seen some papers that use “realism” in a way that has a clear physical definition, such as one I came across that defined it in terms of a hidden variable theory. Yet I have also seen a paper coauthored by the great Anton Zeilinger that speaks of “local realism” but very explicitly uses “realism” with its philosophical meaning, that there is an objective reality independent of the observer, which to me is absurd to pretend that physics in any way calls into question.

    If you read John Bell’s original paper “On the Einstein Podolsky Rosen Paradox,” he never once uses the term “realism.” The only time I have seen “real” used at all in this early discourse is in the original EPR paper, but this was merely a “criterion” (meaning a necessary but not sufficient condition) for what would constitute a theory that is a complete description of reality. Einstein/Podolsky/Rosen in no way presented this as a definition of “reality” or a kind of “realism.”

    Indeed, even using the term “realism” on its own is ambiguous, as there are many kinds of “realisms” in the literature. The phrase “local realism” on its own is bound to lead to confusion, and it does, because, as I pointed out, even in the published literature physicists do not always use “realism” consistently. If you are going to talk about “realism,” you need to preface it to make clear what kind of realism you are specifically talking about.

    If the reason physicists started to talk about “realism” is that they are specifically referring to something that includes the EPR criterion, then they should call it “EPR realism” or something like that. Just saying “realism” is so absurdly ridiculous it is almost as if they are intentionally trying to cause confusion. I don’t really blame anyone who gets confused by this because, like I said, even the peer-reviewed papers do not use it consistently.

    The phrase “observer-dependence” is also very popular in the published literature. So, while I am not disagreeing with you that “observation” is just an interaction, this is actually a rather uncommon position known as relational quantum mechanics.



  • A lot of people who present quantum mechanics to a lay audience seem to intentionally present it to be as confusing as possible because they like the “mystery” behind it. Yet it is also easy to present it in a trivially simple and boring way that is easy to understand.

    Here, I will give you a simple framework of just 3 rules; if you keep them in mind then literally everything in quantum mechanics makes sense and follows quite simply.

    1. Quantum mechanics is a probabilistic theory where, unlike classical probability theory, the probabilities of events can be complex-valued. For example, it is meaningful in quantum mechanics for an event to have something like a -70.7i% chance of occurring.
    2. The physical interpretation of complex-valued probabilities is that the further the probability is from zero, the more likely it is. For example, an event with a -70.7i% probability of occurring is more likely than one with a 50% probability of occurring because it is further from zero. (You can convert quantum probabilities to classical just by computing their square magnitudes, which is known as the Born rule.)
    3. If two or more events become statistically correlated with one another (this is known as “entanglement”), the rules of quantum mechanics disallow you from assigning quantum probabilities to the individual systems taken separately. You can only assign the quantum probabilities to the two or more events taken together. (The only way to recover the individual probabilities is to do something called a partial trace to compute the reduced density matrix.)

    If you keep those three principles in mind, then everything in quantum mechanics follows directly, every “paradox” is resolved, there is no confusion about anything.
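    Here is a minimal numerical sketch of rules 1 and 2 (just numpy; the specific amplitudes are the ones from the example above):

    ```python
    import numpy as np

    # "Quantum probabilities" are complex amplitudes; the Born rule converts them
    # into ordinary probabilities by taking the squared magnitude.
    amplitudes = np.array([-0.707j, 0.5])   # the -70.7i% and 50% cases from rule 2
    classical = np.abs(amplitudes) ** 2
    print(classical)                        # ~[0.5, 0.25]: the first event is more likely
    ```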

    For example, why is it that people say quantum mechanics is fundamentally random? Well, because if the universe is deterministic, then all outcomes have either a 0% or 100% probability, and all other probabilities are simply due to ignorance (what is called “epistemic”). Notice how 0% and 100% have no negative or imaginary terms. They thus could not give rise to quantum effects.

    These quantum effects are interference effects. You see, if probabilities are only between 0% and 100% then they can only be cumulative. However, if they can be negative, then the probabilities of events can cancel each other out and you get no outcome at all. This is called destructive interference and is unique to quantum mechanics. Interference effects like this could not be observed in a deterministic universe because, in reality, no event could have a negative chance of occurring (because, again, in a deterministic universe, the only possible probabilities are 0% or 100%).
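    A toy sketch of that cancellation (numpy again; the two amplitudes are just an illustrative choice):

    ```python
    import numpy as np

    # Amplitudes for two indistinguishable paths to the same detector.
    path_a, path_b = 1 / np.sqrt(2), -1 / np.sqrt(2)

    # Quantum rule: add the amplitudes first, then square the magnitude.
    print(abs(path_a + path_b) ** 2)            # 0.0 -- destructive interference

    # If the paths are distinguishable you add probabilities instead, and nothing
    # can cancel because every term is non-negative.
    print(abs(path_a) ** 2 + abs(path_b) ** 2)  # 1.0
    ```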

    If we look at the double-slit experiment, people then ask why the interference pattern seems to go away when you measure which path the photon took. Well, if you keep this in mind, it’s simple. There are actually two reasons, and it depends upon perspective.

    If you are the person conducting the experiment, when you measure the photon, it’s impossible to measure half a photon. It’s either there or it’s not, so 0% or 100%. You thus force it into a definite state, and again, those are deterministic probabilities (no negative or imaginary terms), so it loses its ability to interfere with itself.

    Now, let’s say you have an outside observer who doesn’t see your measurement results. For him, it’s still probabilistic since he has no idea which path it took. Yet, the whole point of a measuring device is to become statistically correlated with what you are measuring. So if we go to rule #3, the measuring device should be entangled with the particle, and so we cannot apply the quantum probabilities to the particle itself, but only to both the particle and measuring device taken together.

    Hence, from the outside observer’s perspective, only the particle and measuring device collectively could exhibit quantum interference. Yet only the particle passes through the two slits on its own, without the measuring device. Thus, he too would predict that it would not interfere with itself.
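    A minimal sketch of rule 3 at work (numpy; the which-path “detector” is just a second two-state system I made up for illustration): entangle the particle with a detector, trace the detector out, and the particle’s coherence terms vanish:

    ```python
    import numpy as np

    s = 1 / np.sqrt(2)

    # Particle alone, in a superposition over the two slits: off-diagonal
    # "coherence" terms are present, so it can interfere with itself.
    psi = np.array([s, s])
    print(np.outer(psi, psi.conj()))               # off-diagonals = 0.5

    # Particle entangled with a which-path detector:
    # amplitude s for (left, detector-saw-left) and s for (right, detector-saw-right).
    pair = np.zeros((2, 2), dtype=complex)
    pair[0, 0] = s
    pair[1, 1] = s
    rho_pair = np.outer(pair.ravel(), pair.ravel().conj())

    # Partial trace over the detector gives the particle's reduced density matrix.
    rho_particle = np.einsum("ikjk->ij", rho_pair.reshape(2, 2, 2, 2))
    print(rho_particle)                            # diagonal 0.5s, coherences = 0: no interference
    ```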

    Just keep these three rules in mind and you basically “get” quantum mechanics. All the other fluff you hear is people attempting to make it sound more mystical than it actually is, such as by interpreting the probability distribution as a literal physical entity, or even going more bonkers and calling it a grand multiverse, and then debating over the nature of this entity they entirely made up.

    It’s literally just statistics with some slightly different rules.


  • I am saying that assigning ontological reality to something that is by definition beyond observation (not what we observe and not even possible to observe) is metaphysical. If we explain the experiment using what we observe then there is no confusion or contradiction, or any ambiguity at all. Indeed, quantum mechanics becomes rather mechanical and boring; all the supposed mysticism disappears.

    It is quite the opposite: the statistical behavior of the electron is decoupled from the individual electron. The individual electron just behaves randomly in a way that we can only predict statistically and not absolutely. There is no interference pattern at all for a single electron, at least not in the double-slit experiment (the Mach–Zehnder interferometer is arguably a bit more interesting). The interference pattern observed in the double-slit experiment is a weakly emergent behavior of an ensemble of electrons. You need thousands of them to actually see it.


  • I am factually correct, and I am not here to “debate”; I am telling you how the theory works. When two systems interact such that they become statistically correlated with one another and knowing the state of one tells you the state of the other, it is no longer valid to assign a state vector to the individual subsystems that took part in the interaction; you have to assign it to the system as a whole. When you do a partial trace to get a reduced density matrix for each subsystem individually, if they are perfectly entangled, then you end up with a density matrix without coherence terms and thus without interference effects.

    This absolutely is entanglement; this is what entanglement is. I am not misunderstanding what entanglement is. If you think what I have described here is not entanglement but a superposition of states, then you don’t know what a superposition of states is. Yes, an entangled state would be in a superposition of states, but it would be a superposition of states which can only be applied to both correlated systems together and not to the individual subsystems.

    Let’s say R = 1/sqrt(2) and Alice sends Bob a qubit. If the qubit has a probability of 1 of being the value 1 and Alice applies the Hadamard gate, it changes to R probability of being 0 and -R probability of being 1. In this state, if Bob were to apply a second Hadamard gate, then it undoes the first Hadamard gate and so it would have a probability of 1 of being a value of 1 due to interference effects.

    However, if an eavesdropper, let’s call them Eve, measures the qubit in transit, because R and -R are equal distances from the origin, it would have an equal chance of being 0 or 1. Let’s say it’s 1. From their point of view, they would then update their probability distribution to be a probability of 1 of being the value 1 and send it off to Bob. When Bob applies the second Hadamard gate, it would then have a probability of R for being 0 and a probability of -R for being 1, and thus what should’ve been deterministic is now random noise for Bob.

    Yet, this description only works from Eve’s point of view. From Alice and Bob’s point of view, neither of them measured the particle in transit, so when Bob receives it, it is still probabilistic with an equal chance of being 0 and 1. So why does Bob still predict that interference effects will be lost if it is still probabilistic for him?

    Because when Eve interacts with the qubit, from Alice and Bob’s perspective, it is no longer valid to assign a state vector to the qubit on its own. Eve and the qubit become correlated with one another. For Eve to know the particle’s state, there has to be some correlation between something in Eve’s brain (or, more directly, her measuring device) and the state of the particle. They are thus entangled with one another and Alice and Bob would have to assign the state vector to Eve and the qubit taken together and not to the individual parts.

    Eve and the qubit taken together would have a probability distribution of R for the qubit being 0 and Eve knowing the qubit is 0, and a probability of -R for the qubit being 1 and Eve knowing the qubit is 1. There are still interference effects, but only for the whole system taken together. Yet, Bob does not receive Eve and the qubit taken together. He receives only the qubit, so this probability distribution is no longer applicable to the qubit.

    He instead has to do a partial trace to trace out (ignore) Eve from the equation to know how his qubit alone would behave. When he does this, he finds that the probability distribution has changed to 0.5 for 0 and 0.5 for 1. In the density matrix representation, you will see that the density matrix has all zeroes for the coherences. This is a classical probability distribution, something that cannot exhibit interference effects.
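    If it helps, here is a minimal numerical sketch of that whole sequence (numpy only; H is the standard Hadamard matrix, and the “Eve measures” step is modeled by zeroing the coherences, which is exactly what the partial trace does here):

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    one = np.array([0, 1], dtype=complex)      # qubit prepared as |1>

    # No eavesdropper: H followed by H is the identity, so Bob gets 1 with certainty.
    print(np.abs(H @ (H @ one)) ** 2)          # [0, 1]

    # Eve measures in transit: from Alice and Bob's perspective the qubit is now
    # entangled with Eve, and tracing Eve out zeroes the coherence terms.
    after_first_H = H @ one
    rho = np.diag(np.abs(after_first_H) ** 2)  # reduced density matrix: diag(0.5, 0.5)
    rho_after_bob = H @ rho @ H.conj().T
    print(np.real(np.diag(rho_after_bob)))     # [0.5, 0.5] -- random noise for Bob
    ```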

    Bob simply cannot explain why his qubit loses its interference effects by Eve measuring it without Bob taking into account entanglement, at least within the framework of quantum theory. That is just how the theory works. The explanation from Eve’s perspective simply does not work for Bob in quantum mechanics. Reducing the state vector simultaneously between two different perspectives is known as an objective collapse model and makes different statistical predictions than quantum mechanics. It would not merely be an alternative interpretation but an alternative theory.

    Eve explains the loss of coherence by her reducing the state vector upon seeing a definite outcome for the qubit, and Bob explains the loss of coherence by Eve becoming entangled with the qubit, which leads to decoherence: doing a partial trace to trace out (ignore) Eve gives a reduced density matrix for the qubit whereby the coherence terms are zero.


  • Schrödinger was not “rejecting” quantum mechanics; he was rejecting people treating things described by a superposition of states as literally existing in “two places at once.” And Schrödinger’s argument still holds up perfectly. What you are doing is equating a very dubious philosophical take on quantum mechanics with quantum mechanics itself, as if anyone who does not adhere to this dubious philosophical take is “denying quantum mechanics.” But this was not what Schrödinger was doing at all.

    What you say here is a popular opinion, but it just doesn’t make any sense if you apply any scrutiny to it, which is what Schrödinger was trying to show. Quantum mechanics is a statistical theory where probability amplitudes are complex-valued, so things can have a -100% chance of occurring, or even a 100i% chance of occurring. This gives rise to interference effects which are unique to quantum mechanics. You interpret what these probabilities mean in physical reality based on how far they are away from zero (the further from zero, the more probable), but the negative signs allow for things to cancel out in ways that would not occur in normal probability theory, known as interference effects. Interference effects are the hallmark of quantum mechanics.

    Because quantum probabilities have this difference, some people have wondered if maybe they are not probabilities at all but describe some sort of physical entity. If you believe this, then when you describe a particle as having a 50% probability of being here and a 50% probability of being there, then this is not just a statistical prediction but there must be some sort of “smeared out” entity that is both here and there simultaneously. Schrödinger showed that believing this leads to nonsense as you could trivially set up a chain reaction that scales up the effect of a single particle in a superposition of states to eventually affect a big system, forcing you to describe the big system, like a cat, in a superposition of states. If you believe particles really are “smeared out” here and there simultaneously, then you have to believe cats can be both “smeared out” here and there simultaneously.

    Ironically, it was Schrödinger himself who spawned this way of thinking. Quantum mechanics was originally formulated without superposition in what is known as matrix mechanics. Matrix mechanics is complete, meaning it makes all the same predictions as traditional quantum mechanics. It is a mathematically equivalent theory. Yet what is different about it is that it does not include any sort of continuous evolution of a quantum state. It only describes discrete observables and how they change when they undergo discrete interactions.

    Schrödinger did not like this on philosophical grounds due to the lack of continuity. There were discrete “gaps” between interactions. He criticized it, saying “I do not believe that the electron hops about like a flea,” and came up with his famous wave equation as a replacement. This wave equation describes a list of probability amplitudes evolving like a wave in between interactions, and it makes the same predictions as matrix mechanics. People then use the wave equation to argue that the particle literally becomes smeared out like a wave in between interactions.

    However, Schrödinger later abandoned this point of view because it leads to nonsense. He pointed out in one of his books that while his wave equation gets rid of the gaps in between interactions, it introduces a new gap between the wave and the particle: the moment you measure the wave, it randomly “jumps” into being a particle, which is sometimes called the “collapse of the wave function.” This made even less sense because suddenly there is a special role for measurement. Take the cat example. Why doesn’t the cat’s observation of this wave cause it to “collapse,” but the person’s observation does? There is no special role for “measurement” in quantum mechanics, so it is unclear how to even answer this within the framework of quantum mechanics.

    Schrödinger was thus arguing to go back to the position of treating quantum mechanics as a theory of discrete interactions. There are just “gaps” between interactions we cannot fill. The probability distribution does not represent a literal physical entity, it is just a predictive tool, a list of probabilities assigned to predict the outcome of an experiment. If we say a particle has a 50% chance of being here or a 50% chance of being there, it is just a prediction of where it will be if we were to measure it and shouldn’t be interpreted as the particle being literally smeared out between here and there at the same time.

    There is no reason you have to actually believe particles can be smeared out between here and there at the same time. This is a philosophical interpretation which, if you believe it, comes with an enormous number of problems, such as the one Schrödinger pointed out, which ultimately gets to the heart of the measurement problem, but there are even larger problems. Wigner also pointed out a paradox whereby two observers would assign different probability distributions to the same system. If they are merely probabilities, this isn’t a problem. If I flip a coin and look at the outcome and it’s heads, I would say it has a 100% chance of being heads because I saw it as heads, but if I asked you and covered it up so you did not see it, you would assign a 50% probability to it being heads or tails. If you believe the wave function represents a physical entity, then you could set up something similar in quantum mechanics whereby two different observers would describe two different waves, and so the physical shape of the wave would have to differ based on the observer.

    There are a lot more problems as well. A probability distribution scales up in terms of its dimensions exponentially. With a single bit, there are two possible outcomes, 0 and 1. With two bits, there’s four possible outcomes, 00, 01, 10, and 11. With three bits, eight outcomes. With four bits, sixteen outcomes. If we assign a probability amplitude to each possible outcome, then the number of degrees of freedom grows exponentially the more bits we have under consideration.

    This is also true in quantum mechanics for the wave function, since it is again basically a list of probability amplitudes. If we treat the wave function as representing a physical wave, then this wave would not exist in our four-dimensional spacetime, but instead in an infinite-dimensional space known as Hilbert space. If you want to believe the universe is actually physically made up of infinite-dimensional waves, have at it. But personally, I find it much easier to just treat a probability distribution as, well, a probability distribution.


  • What is it then? If you say it’s a wave, well, that wave is in Hilbert space, which is infinite-dimensional, not in spacetime, which is four-dimensional, so what does it mean to say the wave is “going through” the slit if it doesn’t exist in spacetime? Personally, I think all the confusion around QM stems from trying to objectify a probability distribution, which is what people do when they claim it turns into a literal wave.

    To be honest, I think it’s cheating. People are used to physics being continuous, but in quantum mechanics it is discrete. Schrödinger showed that if you take any operator and compute a derivative, you can “fill in the gaps” in between interactions, but this is purely metaphysical. You never see these “in between” gaps. It’s just a nice little mathematical trick and nothing more. Even Schrödinger later abandoned this idea and admitted that trying to fill in the gaps between interactions just leads to confusion, in his books Nature and the Greeks and Science and Humanism.

    What’s even more problematic about this viewpoint is that Schrödinger’s wave equation is the result of a very particular mathematical formalism. It is not actually needed to make correct predictions. Heisenberg had developed what is known as matrix mechanics, whereby you evolve the observables themselves rather than the state vector. Every time there is an interaction, you apply a discrete change to the observables. You always get the right statistical predictions and yet you don’t need the wave function at all.

    The wave function is purely a result of a particular mathematical formalism and there is no reason to assign it ontological reality. Even then, if you have ever worked with quantum mechanics, it is quite apparent that the wave function is just a function for picking probability amplitudes from a state vector, and the state vector is merely a list of, well, probability amplitudes. Quantum mechanics is probabilistic so we assign things a list of probabilities. Treating a list of probabilities as if it has ontological existence doesn’t even make any sense, and it baffles me that it is so popular for people to do so.

    This is why Hilbert space is infinite-dimensional. If I have a single qubit, there are two possible outcomes, 0 and 1. If I have two qubits, there are four possible outcomes, 00, 01, 10, and 11. If I have three qubits, there are eight possible outcomes, 000, 001, 010, 011, 100, 101, 110, and 111. If I assign a probability amplitude to each possible outcome, then the degrees of freedom grow exponentially as I include more qubits in my system. The number of degrees of freedom is unbounded.
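    A quick sketch of that scaling (plain Python/numpy; the qubit counts are arbitrary):

    ```python
    import numpy as np

    # One complex amplitude per bit string: the state vector doubles with each qubit.
    for n in (1, 2, 3, 10, 30):
        print(n, "qubits ->", 2 ** n, "amplitudes")

    # e.g. a 3-qubit state: eight amplitudes, one per outcome 000..111
    state = np.zeros(2 ** 3, dtype=complex)
    state[0b000] = 1.0   # all of the amplitude on |000>
    ```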

    This is exactly how Hilbert space works. Interpreting it as a physical infinite-dimensional space that waves really propagate through just makes absolutely no sense!


  • It is weird that you start by criticizing the idea that our physical theories are descriptions of reality and then end by criticizing the Copenhagen interpretation, since that is the Copenhagen interpretation: it says that physics is not about describing nature but about describing what we can say about nature. It doesn’t make claims about underlying ontological reality but specifically says we cannot make those claims from physics, and thus it treats the maths in a more utilitarian fashion.

    The only interpretation of quantum mechanics that actually tries to interpret it at face value as a theory of the natural world is relational quantum mechanics, which isn’t that popular, as most people dislike the notion of reality being relative all the way down. Almost all philosophers in academia define objective reality in terms of something being absolute and point-of-view independent, so most academics struggle to comprehend what it even means to say that reality is relative all the way down, and thus interpreting quantum mechanics as a theory of nature at face value is actually very unpopular.

    All other interpretations either: (1) treat quantum mechanics as incomplete and therefore something needs to be added to it in order to complete it, such as hidden variables in the case of pilot wave theory or superdeterminism, or a universal psi with some underlying mathematics from which to derive the Born rule in the Many Worlds Interpretation, or (2) avoid saying anything about physical reality at all, such as Copenhagen or QBism.

    Since you talk about “free will,” I suppose you are talking about superdeterminism? Superdeterminism works by pointing out that at the Big Bang, everything was localized to a single place, and thus locally causally connected, so all apparent nonlocality could be explained if the correlations between things were all established at the Big Bang. The problem with this point of view, however, is that it only works if you know the initial configuration of all particles in the universe and have a supercomputer powerful enough to trace them forward to the present day.

    Without that, you cannot actually predict any of these correlations ahead of time. You have to just assume that the particles “know” how to correlate with one another at a distance even though you cannot account for how this happens. Mathematically, this would be the same as a nonlocal hidden variable theory. While you might have a nice underlying philosophical story to go along with it as to how it isn’t truly nonlocal, the maths would still run into contradictions with special relativity. You would find it difficult to construct the maths in such a way that the hidden variables would be Lorentz invariant.

    Superdeterministic models thus struggle to ever get off the ground. They all exist only as toy models. None of them can reproduce all the predictions of quantum field theory, which requires more than just accounting for quantum mechanics; it requires doing so in a way that is also compatible with special relativity.



  • Personally, I think there is a much bigger issue with the quantum internet that is often not discussed, and it’s not just noise.

    Imagine, for example, I were to offer you two algorithms. One can encrypt things so well that it would take a hundred trillion years for even a superadvanced quantum computer to break the encryption, and it has almost no overhead. The other is truly unbreakable even in an infinite amount of time, but it has so much overhead that it will cut your bandwidth in half.

    Which would you pick?

    In practice, there is no difference between an algorithm that cannot be broken for trillions of years, and an algorithm that cannot be broken at all. But, in practice, cutting your internet bandwidth in half is a massive downside. The tradeoff just isn’t worth it.

    All quantum “internet” algorithms suffer from this problem. There is always some massive practical tradeoff for a purely theoretical benefit. Even if we entirely solved the noise problem, there would still be no practical reason at all to adopt the quantum internet.